Test Report: KVM_Linux_crio 20068

3e5ae302b6a4bf4af6cc92954bf8488d685fb633:2024-12-09:37406

Failed tests (32/316)

Order  Failed test  Duration (s)
36 TestAddons/parallel/Ingress 155.16
38 TestAddons/parallel/MetricsServer 327.51
47 TestAddons/StoppedEnableDisable 154.43
166 TestMultiControlPlane/serial/StopSecondaryNode 141.66
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.67
168 TestMultiControlPlane/serial/RestartSecondaryNode 6.4
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.22
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 361.83
173 TestMultiControlPlane/serial/StopCluster 142.28
233 TestMultiNode/serial/RestartKeepsNodes 325.05
235 TestMultiNode/serial/StopMultiNode 145.34
242 TestPreload 175.44
250 TestKubernetesUpgrade 363.31
275 TestPause/serial/SecondStartNoReconfiguration 94.2
287 TestStartStop/group/old-k8s-version/serial/FirstStart 269.15
294 TestStartStop/group/embed-certs/serial/Stop 139.02
297 TestStartStop/group/no-preload/serial/Stop 138.97
298 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
302 TestStartStop/group/old-k8s-version/serial/DeployApp 0.5
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 99.87
304 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.98
311 TestStartStop/group/old-k8s-version/serial/SecondStart 708.59
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.14
315 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.25
316 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.18
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.37
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 447.38
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 543.34
320 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 358.03
321 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 178.45
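
The detailed logs for the failures follow, starting with TestAddons/parallel/Ingress. As a rough local reproduction sketch (not part of the report output): assuming a checkout of the minikube repository at the commit listed in the header, with the integration tests under ./test/integration (the addons_test.go and helpers_test.go files referenced below live there) and the out/minikube-linux-amd64 binary already built, a single failed test can usually be re-run with the standard Go test runner. Any harness-specific flags for selecting the kvm2 driver and crio container runtime are omitted here and would need to be supplied.

# Sketch only: re-run one failed test from this report locally.
# The package path and timeout are assumptions, not taken from the report.
go test -v -timeout 60m ./test/integration -run 'TestAddons/parallel/Ingress'
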
TestAddons/parallel/Ingress (155.16s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-156041 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-156041 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-156041 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [63f24e56-7ff2-470f-aef8-eaf2dada0965] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [63f24e56-7ff2-470f-aef8-eaf2dada0965] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003186205s
I1209 10:37:58.141508  617017 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-156041 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.416007604s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-156041 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.161
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-156041 -n addons-156041
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-156041 logs -n 25: (1.32707049s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 09 Dec 24 10:34 UTC | 09 Dec 24 10:34 UTC |
	| delete  | -p download-only-596508                                                                     | download-only-596508 | jenkins | v1.34.0 | 09 Dec 24 10:34 UTC | 09 Dec 24 10:34 UTC |
	| delete  | -p download-only-942086                                                                     | download-only-942086 | jenkins | v1.34.0 | 09 Dec 24 10:34 UTC | 09 Dec 24 10:34 UTC |
	| delete  | -p download-only-596508                                                                     | download-only-596508 | jenkins | v1.34.0 | 09 Dec 24 10:34 UTC | 09 Dec 24 10:34 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-654291 | jenkins | v1.34.0 | 09 Dec 24 10:34 UTC |                     |
	|         | binary-mirror-654291                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45797                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-654291                                                                     | binary-mirror-654291 | jenkins | v1.34.0 | 09 Dec 24 10:34 UTC | 09 Dec 24 10:34 UTC |
	| addons  | enable dashboard -p                                                                         | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:34 UTC |                     |
	|         | addons-156041                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:34 UTC |                     |
	|         | addons-156041                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-156041 --wait=true                                                                | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:34 UTC | 09 Dec 24 10:36 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-156041 addons disable                                                                | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:36 UTC | 09 Dec 24 10:36 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-156041 addons disable                                                                | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:36 UTC | 09 Dec 24 10:36 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:36 UTC | 09 Dec 24 10:36 UTC |
	|         | -p addons-156041                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-156041 addons                                                                        | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:37 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-156041 addons disable                                                                | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:37 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-156041 ip                                                                            | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:37 UTC |
	| addons  | addons-156041 addons disable                                                                | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:37 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-156041 addons disable                                                                | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:37 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-156041 ssh cat                                                                       | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:37 UTC |
	|         | /opt/local-path-provisioner/pvc-24d2631a-658d-4b19-9ca8-01e524add183_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-156041 addons disable                                                                | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:38 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-156041 addons                                                                        | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:37 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-156041 addons                                                                        | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:37 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-156041 ssh curl -s                                                                   | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-156041 addons                                                                        | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:38 UTC | 09 Dec 24 10:38 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-156041 addons                                                                        | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:38 UTC | 09 Dec 24 10:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-156041 ip                                                                            | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:40 UTC | 09 Dec 24 10:40 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 10:34:10
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 10:34:10.334032  617708 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:34:10.334143  617708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:34:10.334154  617708 out.go:358] Setting ErrFile to fd 2...
	I1209 10:34:10.334158  617708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:34:10.334365  617708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 10:34:10.335026  617708 out.go:352] Setting JSON to false
	I1209 10:34:10.335966  617708 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11794,"bootTime":1733728656,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 10:34:10.336071  617708 start.go:139] virtualization: kvm guest
	I1209 10:34:10.338055  617708 out.go:177] * [addons-156041] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 10:34:10.339160  617708 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 10:34:10.339161  617708 notify.go:220] Checking for updates...
	I1209 10:34:10.341433  617708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 10:34:10.342561  617708 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:34:10.343693  617708 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:34:10.344680  617708 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 10:34:10.345673  617708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 10:34:10.346802  617708 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 10:34:10.379431  617708 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 10:34:10.380633  617708 start.go:297] selected driver: kvm2
	I1209 10:34:10.380648  617708 start.go:901] validating driver "kvm2" against <nil>
	I1209 10:34:10.380663  617708 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 10:34:10.381417  617708 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:34:10.381512  617708 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 10:34:10.397095  617708 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 10:34:10.397149  617708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 10:34:10.397440  617708 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 10:34:10.397484  617708 cni.go:84] Creating CNI manager for ""
	I1209 10:34:10.397538  617708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 10:34:10.397548  617708 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 10:34:10.397624  617708 start.go:340] cluster config:
	{Name:addons-156041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-156041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:34:10.397744  617708 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:34:10.400455  617708 out.go:177] * Starting "addons-156041" primary control-plane node in "addons-156041" cluster
	I1209 10:34:10.401804  617708 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:34:10.401854  617708 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 10:34:10.401872  617708 cache.go:56] Caching tarball of preloaded images
	I1209 10:34:10.401950  617708 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 10:34:10.401962  617708 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 10:34:10.402282  617708 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/config.json ...
	I1209 10:34:10.402311  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/config.json: {Name:mkf770aad6ba2027e147531a9983e08c583227ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:10.402467  617708 start.go:360] acquireMachinesLock for addons-156041: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 10:34:10.402521  617708 start.go:364] duration metric: took 39.895µs to acquireMachinesLock for "addons-156041"
	I1209 10:34:10.402538  617708 start.go:93] Provisioning new machine with config: &{Name:addons-156041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:addons-156041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:34:10.402598  617708 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 10:34:10.404184  617708 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1209 10:34:10.404378  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:34:10.404427  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:34:10.419136  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36221
	I1209 10:34:10.419623  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:34:10.420266  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:34:10.420288  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:34:10.420644  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:34:10.420843  617708 main.go:141] libmachine: (addons-156041) Calling .GetMachineName
	I1209 10:34:10.421007  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:34:10.421157  617708 start.go:159] libmachine.API.Create for "addons-156041" (driver="kvm2")
	I1209 10:34:10.421186  617708 client.go:168] LocalClient.Create starting
	I1209 10:34:10.421234  617708 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 10:34:10.557806  617708 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 10:34:10.898742  617708 main.go:141] libmachine: Running pre-create checks...
	I1209 10:34:10.898772  617708 main.go:141] libmachine: (addons-156041) Calling .PreCreateCheck
	I1209 10:34:10.899263  617708 main.go:141] libmachine: (addons-156041) Calling .GetConfigRaw
	I1209 10:34:10.899724  617708 main.go:141] libmachine: Creating machine...
	I1209 10:34:10.899738  617708 main.go:141] libmachine: (addons-156041) Calling .Create
	I1209 10:34:10.899907  617708 main.go:141] libmachine: (addons-156041) Creating KVM machine...
	I1209 10:34:10.901204  617708 main.go:141] libmachine: (addons-156041) DBG | found existing default KVM network
	I1209 10:34:10.902054  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:10.901908  617731 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123350}
	I1209 10:34:10.902082  617708 main.go:141] libmachine: (addons-156041) DBG | created network xml: 
	I1209 10:34:10.902096  617708 main.go:141] libmachine: (addons-156041) DBG | <network>
	I1209 10:34:10.902125  617708 main.go:141] libmachine: (addons-156041) DBG |   <name>mk-addons-156041</name>
	I1209 10:34:10.902156  617708 main.go:141] libmachine: (addons-156041) DBG |   <dns enable='no'/>
	I1209 10:34:10.902165  617708 main.go:141] libmachine: (addons-156041) DBG |   
	I1209 10:34:10.902194  617708 main.go:141] libmachine: (addons-156041) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1209 10:34:10.902217  617708 main.go:141] libmachine: (addons-156041) DBG |     <dhcp>
	I1209 10:34:10.902228  617708 main.go:141] libmachine: (addons-156041) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1209 10:34:10.902240  617708 main.go:141] libmachine: (addons-156041) DBG |     </dhcp>
	I1209 10:34:10.902249  617708 main.go:141] libmachine: (addons-156041) DBG |   </ip>
	I1209 10:34:10.902255  617708 main.go:141] libmachine: (addons-156041) DBG |   
	I1209 10:34:10.902265  617708 main.go:141] libmachine: (addons-156041) DBG | </network>
	I1209 10:34:10.902271  617708 main.go:141] libmachine: (addons-156041) DBG | 
	I1209 10:34:10.907299  617708 main.go:141] libmachine: (addons-156041) DBG | trying to create private KVM network mk-addons-156041 192.168.39.0/24...
	I1209 10:34:10.974694  617708 main.go:141] libmachine: (addons-156041) DBG | private KVM network mk-addons-156041 192.168.39.0/24 created
	I1209 10:34:10.974726  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:10.974660  617731 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:34:10.974750  617708 main.go:141] libmachine: (addons-156041) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041 ...
	I1209 10:34:10.974773  617708 main.go:141] libmachine: (addons-156041) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 10:34:10.974794  617708 main.go:141] libmachine: (addons-156041) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 10:34:11.267720  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:11.267524  617731 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa...
	I1209 10:34:11.459007  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:11.458838  617731 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/addons-156041.rawdisk...
	I1209 10:34:11.459048  617708 main.go:141] libmachine: (addons-156041) DBG | Writing magic tar header
	I1209 10:34:11.459065  617708 main.go:141] libmachine: (addons-156041) DBG | Writing SSH key tar header
	I1209 10:34:11.459075  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:11.458964  617731 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041 ...
	I1209 10:34:11.459088  617708 main.go:141] libmachine: (addons-156041) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041
	I1209 10:34:11.459099  617708 main.go:141] libmachine: (addons-156041) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041 (perms=drwx------)
	I1209 10:34:11.459109  617708 main.go:141] libmachine: (addons-156041) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 10:34:11.459120  617708 main.go:141] libmachine: (addons-156041) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 10:34:11.459136  617708 main.go:141] libmachine: (addons-156041) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 10:34:11.459146  617708 main.go:141] libmachine: (addons-156041) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 10:34:11.459157  617708 main.go:141] libmachine: (addons-156041) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 10:34:11.459165  617708 main.go:141] libmachine: (addons-156041) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 10:34:11.459171  617708 main.go:141] libmachine: (addons-156041) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:34:11.459191  617708 main.go:141] libmachine: (addons-156041) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 10:34:11.459206  617708 main.go:141] libmachine: (addons-156041) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 10:34:11.459213  617708 main.go:141] libmachine: (addons-156041) Creating domain...
	I1209 10:34:11.459227  617708 main.go:141] libmachine: (addons-156041) DBG | Checking permissions on dir: /home/jenkins
	I1209 10:34:11.459234  617708 main.go:141] libmachine: (addons-156041) DBG | Checking permissions on dir: /home
	I1209 10:34:11.459244  617708 main.go:141] libmachine: (addons-156041) DBG | Skipping /home - not owner
	I1209 10:34:11.460267  617708 main.go:141] libmachine: (addons-156041) define libvirt domain using xml: 
	I1209 10:34:11.460319  617708 main.go:141] libmachine: (addons-156041) <domain type='kvm'>
	I1209 10:34:11.460332  617708 main.go:141] libmachine: (addons-156041)   <name>addons-156041</name>
	I1209 10:34:11.460344  617708 main.go:141] libmachine: (addons-156041)   <memory unit='MiB'>4000</memory>
	I1209 10:34:11.460355  617708 main.go:141] libmachine: (addons-156041)   <vcpu>2</vcpu>
	I1209 10:34:11.460367  617708 main.go:141] libmachine: (addons-156041)   <features>
	I1209 10:34:11.460380  617708 main.go:141] libmachine: (addons-156041)     <acpi/>
	I1209 10:34:11.460391  617708 main.go:141] libmachine: (addons-156041)     <apic/>
	I1209 10:34:11.460403  617708 main.go:141] libmachine: (addons-156041)     <pae/>
	I1209 10:34:11.460413  617708 main.go:141] libmachine: (addons-156041)     
	I1209 10:34:11.460459  617708 main.go:141] libmachine: (addons-156041)   </features>
	I1209 10:34:11.460483  617708 main.go:141] libmachine: (addons-156041)   <cpu mode='host-passthrough'>
	I1209 10:34:11.460490  617708 main.go:141] libmachine: (addons-156041)   
	I1209 10:34:11.460499  617708 main.go:141] libmachine: (addons-156041)   </cpu>
	I1209 10:34:11.460508  617708 main.go:141] libmachine: (addons-156041)   <os>
	I1209 10:34:11.460515  617708 main.go:141] libmachine: (addons-156041)     <type>hvm</type>
	I1209 10:34:11.460527  617708 main.go:141] libmachine: (addons-156041)     <boot dev='cdrom'/>
	I1209 10:34:11.460537  617708 main.go:141] libmachine: (addons-156041)     <boot dev='hd'/>
	I1209 10:34:11.460549  617708 main.go:141] libmachine: (addons-156041)     <bootmenu enable='no'/>
	I1209 10:34:11.460557  617708 main.go:141] libmachine: (addons-156041)   </os>
	I1209 10:34:11.460562  617708 main.go:141] libmachine: (addons-156041)   <devices>
	I1209 10:34:11.460568  617708 main.go:141] libmachine: (addons-156041)     <disk type='file' device='cdrom'>
	I1209 10:34:11.460606  617708 main.go:141] libmachine: (addons-156041)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/boot2docker.iso'/>
	I1209 10:34:11.460636  617708 main.go:141] libmachine: (addons-156041)       <target dev='hdc' bus='scsi'/>
	I1209 10:34:11.460648  617708 main.go:141] libmachine: (addons-156041)       <readonly/>
	I1209 10:34:11.460660  617708 main.go:141] libmachine: (addons-156041)     </disk>
	I1209 10:34:11.460690  617708 main.go:141] libmachine: (addons-156041)     <disk type='file' device='disk'>
	I1209 10:34:11.460710  617708 main.go:141] libmachine: (addons-156041)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 10:34:11.460733  617708 main.go:141] libmachine: (addons-156041)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/addons-156041.rawdisk'/>
	I1209 10:34:11.460764  617708 main.go:141] libmachine: (addons-156041)       <target dev='hda' bus='virtio'/>
	I1209 10:34:11.460781  617708 main.go:141] libmachine: (addons-156041)     </disk>
	I1209 10:34:11.460792  617708 main.go:141] libmachine: (addons-156041)     <interface type='network'>
	I1209 10:34:11.460803  617708 main.go:141] libmachine: (addons-156041)       <source network='mk-addons-156041'/>
	I1209 10:34:11.460811  617708 main.go:141] libmachine: (addons-156041)       <model type='virtio'/>
	I1209 10:34:11.460822  617708 main.go:141] libmachine: (addons-156041)     </interface>
	I1209 10:34:11.460829  617708 main.go:141] libmachine: (addons-156041)     <interface type='network'>
	I1209 10:34:11.460841  617708 main.go:141] libmachine: (addons-156041)       <source network='default'/>
	I1209 10:34:11.460850  617708 main.go:141] libmachine: (addons-156041)       <model type='virtio'/>
	I1209 10:34:11.460856  617708 main.go:141] libmachine: (addons-156041)     </interface>
	I1209 10:34:11.460865  617708 main.go:141] libmachine: (addons-156041)     <serial type='pty'>
	I1209 10:34:11.460874  617708 main.go:141] libmachine: (addons-156041)       <target port='0'/>
	I1209 10:34:11.460888  617708 main.go:141] libmachine: (addons-156041)     </serial>
	I1209 10:34:11.460901  617708 main.go:141] libmachine: (addons-156041)     <console type='pty'>
	I1209 10:34:11.460912  617708 main.go:141] libmachine: (addons-156041)       <target type='serial' port='0'/>
	I1209 10:34:11.460919  617708 main.go:141] libmachine: (addons-156041)     </console>
	I1209 10:34:11.460929  617708 main.go:141] libmachine: (addons-156041)     <rng model='virtio'>
	I1209 10:34:11.460938  617708 main.go:141] libmachine: (addons-156041)       <backend model='random'>/dev/random</backend>
	I1209 10:34:11.460945  617708 main.go:141] libmachine: (addons-156041)     </rng>
	I1209 10:34:11.460952  617708 main.go:141] libmachine: (addons-156041)     
	I1209 10:34:11.460961  617708 main.go:141] libmachine: (addons-156041)     
	I1209 10:34:11.460977  617708 main.go:141] libmachine: (addons-156041)   </devices>
	I1209 10:34:11.460989  617708 main.go:141] libmachine: (addons-156041) </domain>
	I1209 10:34:11.461001  617708 main.go:141] libmachine: (addons-156041) 
	I1209 10:34:11.466548  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:38:e5:cd in network default
	I1209 10:34:11.467097  617708 main.go:141] libmachine: (addons-156041) Ensuring networks are active...
	I1209 10:34:11.467121  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:11.467767  617708 main.go:141] libmachine: (addons-156041) Ensuring network default is active
	I1209 10:34:11.468085  617708 main.go:141] libmachine: (addons-156041) Ensuring network mk-addons-156041 is active
	I1209 10:34:11.468556  617708 main.go:141] libmachine: (addons-156041) Getting domain xml...
	I1209 10:34:11.469226  617708 main.go:141] libmachine: (addons-156041) Creating domain...
	I1209 10:34:12.877259  617708 main.go:141] libmachine: (addons-156041) Waiting to get IP...
	I1209 10:34:12.878071  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:12.878623  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:12.878652  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:12.878601  617731 retry.go:31] will retry after 211.633142ms: waiting for machine to come up
	I1209 10:34:13.092362  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:13.092875  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:13.092901  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:13.092821  617731 retry.go:31] will retry after 334.859148ms: waiting for machine to come up
	I1209 10:34:13.429491  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:13.429829  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:13.429867  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:13.429789  617731 retry.go:31] will retry after 306.448763ms: waiting for machine to come up
	I1209 10:34:13.738661  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:13.739111  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:13.739146  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:13.739068  617731 retry.go:31] will retry after 386.245722ms: waiting for machine to come up
	I1209 10:34:14.126628  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:14.126985  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:14.127010  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:14.126955  617731 retry.go:31] will retry after 694.024962ms: waiting for machine to come up
	I1209 10:34:14.823112  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:14.823577  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:14.823601  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:14.823533  617731 retry.go:31] will retry after 589.517993ms: waiting for machine to come up
	I1209 10:34:15.414323  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:15.414706  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:15.414736  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:15.414644  617731 retry.go:31] will retry after 1.171119297s: waiting for machine to come up
	I1209 10:34:16.587399  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:16.587898  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:16.587919  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:16.587876  617731 retry.go:31] will retry after 964.036276ms: waiting for machine to come up
	I1209 10:34:17.554151  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:17.554514  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:17.554546  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:17.554478  617731 retry.go:31] will retry after 1.154329367s: waiting for machine to come up
	I1209 10:34:18.710995  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:18.711398  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:18.711421  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:18.711353  617731 retry.go:31] will retry after 1.40055916s: waiting for machine to come up
	I1209 10:34:20.113871  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:20.114249  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:20.114281  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:20.114155  617731 retry.go:31] will retry after 2.504420228s: waiting for machine to come up
	I1209 10:34:22.620064  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:22.620525  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:22.620552  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:22.620489  617731 retry.go:31] will retry after 3.130098112s: waiting for machine to come up
	I1209 10:34:25.752259  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:25.752694  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:25.752717  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:25.752635  617731 retry.go:31] will retry after 4.102691958s: waiting for machine to come up
	I1209 10:34:29.860162  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:29.860625  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:29.860661  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:29.860596  617731 retry.go:31] will retry after 3.589941106s: waiting for machine to come up
	I1209 10:34:33.454289  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:33.454832  617708 main.go:141] libmachine: (addons-156041) Found IP for machine: 192.168.39.161
	I1209 10:34:33.454860  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has current primary IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:33.454874  617708 main.go:141] libmachine: (addons-156041) Reserving static IP address...
	I1209 10:34:33.455112  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find host DHCP lease matching {name: "addons-156041", mac: "52:54:00:fc:f1:8a", ip: "192.168.39.161"} in network mk-addons-156041
	I1209 10:34:33.529495  617708 main.go:141] libmachine: (addons-156041) Reserved static IP address: 192.168.39.161
	I1209 10:34:33.529536  617708 main.go:141] libmachine: (addons-156041) DBG | Getting to WaitForSSH function...
	I1209 10:34:33.529545  617708 main.go:141] libmachine: (addons-156041) Waiting for SSH to be available...
	I1209 10:34:33.531927  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:33.532251  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041
	I1209 10:34:33.532283  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find defined IP address of network mk-addons-156041 interface with MAC address 52:54:00:fc:f1:8a
	I1209 10:34:33.532436  617708 main.go:141] libmachine: (addons-156041) DBG | Using SSH client type: external
	I1209 10:34:33.532464  617708 main.go:141] libmachine: (addons-156041) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa (-rw-------)
	I1209 10:34:33.532502  617708 main.go:141] libmachine: (addons-156041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 10:34:33.532530  617708 main.go:141] libmachine: (addons-156041) DBG | About to run SSH command:
	I1209 10:34:33.532544  617708 main.go:141] libmachine: (addons-156041) DBG | exit 0
	I1209 10:34:33.543751  617708 main.go:141] libmachine: (addons-156041) DBG | SSH cmd err, output: exit status 255: 
	I1209 10:34:33.543776  617708 main.go:141] libmachine: (addons-156041) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1209 10:34:33.543783  617708 main.go:141] libmachine: (addons-156041) DBG | command : exit 0
	I1209 10:34:33.543788  617708 main.go:141] libmachine: (addons-156041) DBG | err     : exit status 255
	I1209 10:34:33.543795  617708 main.go:141] libmachine: (addons-156041) DBG | output  : 
	I1209 10:34:36.544520  617708 main.go:141] libmachine: (addons-156041) DBG | Getting to WaitForSSH function...
	I1209 10:34:36.546860  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:36.547188  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:36.547210  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:36.547398  617708 main.go:141] libmachine: (addons-156041) DBG | Using SSH client type: external
	I1209 10:34:36.547432  617708 main.go:141] libmachine: (addons-156041) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa (-rw-------)
	I1209 10:34:36.547474  617708 main.go:141] libmachine: (addons-156041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 10:34:36.547494  617708 main.go:141] libmachine: (addons-156041) DBG | About to run SSH command:
	I1209 10:34:36.547511  617708 main.go:141] libmachine: (addons-156041) DBG | exit 0
	I1209 10:34:36.674295  617708 main.go:141] libmachine: (addons-156041) DBG | SSH cmd err, output: <nil>: 
	I1209 10:34:36.674618  617708 main.go:141] libmachine: (addons-156041) KVM machine creation complete!
	I1209 10:34:36.674998  617708 main.go:141] libmachine: (addons-156041) Calling .GetConfigRaw
	I1209 10:34:36.675624  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:34:36.675849  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:34:36.676031  617708 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 10:34:36.676047  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:34:36.677323  617708 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 10:34:36.677339  617708 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 10:34:36.677344  617708 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 10:34:36.677350  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:36.679630  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:36.679988  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:36.680010  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:36.680141  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:36.680359  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:36.680583  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:36.680757  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:36.680935  617708 main.go:141] libmachine: Using SSH client type: native
	I1209 10:34:36.681213  617708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I1209 10:34:36.681230  617708 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 10:34:36.789341  617708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:34:36.789375  617708 main.go:141] libmachine: Detecting the provisioner...
	I1209 10:34:36.789384  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:36.792214  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:36.792552  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:36.792575  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:36.792755  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:36.792944  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:36.793098  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:36.793288  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:36.793503  617708 main.go:141] libmachine: Using SSH client type: native
	I1209 10:34:36.793721  617708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I1209 10:34:36.793735  617708 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 10:34:36.902802  617708 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 10:34:36.902912  617708 main.go:141] libmachine: found compatible host: buildroot
	I1209 10:34:36.902924  617708 main.go:141] libmachine: Provisioning with buildroot...
	I1209 10:34:36.902935  617708 main.go:141] libmachine: (addons-156041) Calling .GetMachineName
	I1209 10:34:36.903199  617708 buildroot.go:166] provisioning hostname "addons-156041"
	I1209 10:34:36.903231  617708 main.go:141] libmachine: (addons-156041) Calling .GetMachineName
	I1209 10:34:36.903468  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:36.906098  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:36.906435  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:36.906458  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:36.906544  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:36.906773  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:36.906933  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:36.907168  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:36.907352  617708 main.go:141] libmachine: Using SSH client type: native
	I1209 10:34:36.907534  617708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I1209 10:34:36.907548  617708 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-156041 && echo "addons-156041" | sudo tee /etc/hostname
	I1209 10:34:37.029308  617708 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-156041
	
	I1209 10:34:37.029333  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:37.031961  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.032309  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.032340  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.032529  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:37.032696  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:37.032834  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:37.032987  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:37.033191  617708 main.go:141] libmachine: Using SSH client type: native
	I1209 10:34:37.033362  617708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I1209 10:34:37.033378  617708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-156041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-156041/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-156041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 10:34:37.145967  617708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
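The hostname step logged above is an idempotent /etc/hosts edit: if no line for the new hostname exists yet, a stale 127.0.1.1 entry is rewritten, otherwise one is appended. A minimal Go sketch that renders the same shell snippet for an arbitrary hostname (illustrative only, with a hypothetical helper name, not minikube's actual provision code):

package main

import "fmt"

// hostsPatchCmd mirrors the provisioning command shown in the log: it only
// touches /etc/hosts when no entry for the hostname exists yet, either by
// rewriting a stale 127.0.1.1 line or by appending a new one.
func hostsPatchCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsPatchCmd("addons-156041"))
}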
	I1209 10:34:37.146028  617708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 10:34:37.146100  617708 buildroot.go:174] setting up certificates
	I1209 10:34:37.146126  617708 provision.go:84] configureAuth start
	I1209 10:34:37.146151  617708 main.go:141] libmachine: (addons-156041) Calling .GetMachineName
	I1209 10:34:37.146501  617708 main.go:141] libmachine: (addons-156041) Calling .GetIP
	I1209 10:34:37.149063  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.149389  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.149417  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.149536  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:37.151668  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.151919  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.151951  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.152048  617708 provision.go:143] copyHostCerts
	I1209 10:34:37.152128  617708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 10:34:37.152268  617708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 10:34:37.152332  617708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 10:34:37.152381  617708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.addons-156041 san=[127.0.0.1 192.168.39.161 addons-156041 localhost minikube]
	I1209 10:34:37.423254  617708 provision.go:177] copyRemoteCerts
	I1209 10:34:37.423323  617708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 10:34:37.423352  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:37.426066  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.426444  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.426477  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.426608  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:37.426825  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:37.426964  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:37.427113  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:34:37.512546  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 10:34:37.534821  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 10:34:37.556406  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 10:34:37.578220  617708 provision.go:87] duration metric: took 432.072178ms to configureAuth
	I1209 10:34:37.578261  617708 buildroot.go:189] setting minikube options for container-runtime
	I1209 10:34:37.578498  617708 config.go:182] Loaded profile config "addons-156041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:34:37.578590  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:37.580991  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.581399  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.581430  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.581648  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:37.581893  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:37.582085  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:37.582292  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:37.582483  617708 main.go:141] libmachine: Using SSH client type: native
	I1209 10:34:37.582645  617708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I1209 10:34:37.582658  617708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 10:34:37.802965  617708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 10:34:37.802991  617708 main.go:141] libmachine: Checking connection to Docker...
	I1209 10:34:37.803001  617708 main.go:141] libmachine: (addons-156041) Calling .GetURL
	I1209 10:34:37.804293  617708 main.go:141] libmachine: (addons-156041) DBG | Using libvirt version 6000000
	I1209 10:34:37.806358  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.806814  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.806841  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.807024  617708 main.go:141] libmachine: Docker is up and running!
	I1209 10:34:37.807039  617708 main.go:141] libmachine: Reticulating splines...
	I1209 10:34:37.807049  617708 client.go:171] duration metric: took 27.385849388s to LocalClient.Create
	I1209 10:34:37.807093  617708 start.go:167] duration metric: took 27.385936007s to libmachine.API.Create "addons-156041"
	I1209 10:34:37.807118  617708 start.go:293] postStartSetup for "addons-156041" (driver="kvm2")
	I1209 10:34:37.807135  617708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 10:34:37.807161  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:34:37.807425  617708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 10:34:37.807450  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:37.809753  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.810084  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.810110  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.810191  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:37.810395  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:37.810542  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:37.810685  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:34:37.892685  617708 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 10:34:37.896835  617708 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 10:34:37.896866  617708 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 10:34:37.896940  617708 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 10:34:37.896966  617708 start.go:296] duration metric: took 89.837446ms for postStartSetup
	I1209 10:34:37.897022  617708 main.go:141] libmachine: (addons-156041) Calling .GetConfigRaw
	I1209 10:34:37.897693  617708 main.go:141] libmachine: (addons-156041) Calling .GetIP
	I1209 10:34:37.900481  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.900800  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.900827  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.901069  617708 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/config.json ...
	I1209 10:34:37.901268  617708 start.go:128] duration metric: took 27.498657742s to createHost
	I1209 10:34:37.901308  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:37.903609  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.903905  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.903929  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.904025  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:37.904242  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:37.904364  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:37.904520  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:37.904633  617708 main.go:141] libmachine: Using SSH client type: native
	I1209 10:34:37.904792  617708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I1209 10:34:37.904809  617708 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 10:34:38.014838  617708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733740477.994546390
	
	I1209 10:34:38.014871  617708 fix.go:216] guest clock: 1733740477.994546390
	I1209 10:34:38.014884  617708 fix.go:229] Guest: 2024-12-09 10:34:37.99454639 +0000 UTC Remote: 2024-12-09 10:34:37.901281977 +0000 UTC m=+27.606637014 (delta=93.264413ms)
	I1209 10:34:38.014943  617708 fix.go:200] guest clock delta is within tolerance: 93.264413ms
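The clock check above reads the guest's clock over SSH (date +%s.%N), compares it with the host's wall clock, and accepts the skew when it is within a tolerance. A rough Go sketch of that comparison using the values from this log; the one-second tolerance is an assumed example value, not necessarily the threshold minikube applies:

package main

import (
	"fmt"
	"time"
)

// withinTolerance returns the absolute guest/host clock delta and whether it
// falls inside the allowed skew.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log entries above.
	guest := time.Unix(1733740477, 994546390)                       // date +%s.%N inside the VM
	host := time.Date(2024, 12, 9, 10, 34, 37, 901281977, time.UTC) // host-side timestamp
	delta, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta=93.264413ms within tolerance=true
}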
	I1209 10:34:38.014950  617708 start.go:83] releasing machines lock for "addons-156041", held for 27.612418671s
	I1209 10:34:38.014981  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:34:38.015314  617708 main.go:141] libmachine: (addons-156041) Calling .GetIP
	I1209 10:34:38.017805  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:38.018144  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:38.018193  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:38.018360  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:34:38.018854  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:34:38.019051  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:34:38.019154  617708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 10:34:38.019219  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:38.019338  617708 ssh_runner.go:195] Run: cat /version.json
	I1209 10:34:38.019372  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:38.022404  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:38.022572  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:38.022747  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:38.022770  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:38.022942  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:38.022974  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:38.022980  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:38.023174  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:38.023255  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:38.023353  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:38.023412  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:38.023628  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:38.023628  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:34:38.023772  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:34:38.136226  617708 ssh_runner.go:195] Run: systemctl --version
	I1209 10:34:38.142360  617708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 10:34:38.302408  617708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 10:34:38.308127  617708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 10:34:38.308208  617708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 10:34:38.323253  617708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 10:34:38.323283  617708 start.go:495] detecting cgroup driver to use...
	I1209 10:34:38.323366  617708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 10:34:38.338933  617708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 10:34:38.351830  617708 docker.go:217] disabling cri-docker service (if available) ...
	I1209 10:34:38.351888  617708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 10:34:38.364462  617708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 10:34:38.377338  617708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 10:34:38.485900  617708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 10:34:38.645698  617708 docker.go:233] disabling docker service ...
	I1209 10:34:38.645791  617708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 10:34:38.659352  617708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 10:34:38.671571  617708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 10:34:38.786607  617708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 10:34:38.892441  617708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 10:34:38.905567  617708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 10:34:38.922831  617708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 10:34:38.922895  617708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:34:38.932804  617708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 10:34:38.932865  617708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:34:38.942693  617708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:34:38.952607  617708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:34:38.962413  617708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 10:34:38.972377  617708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:34:38.981878  617708 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:34:38.997306  617708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:34:39.007362  617708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 10:34:39.016018  617708 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 10:34:39.016096  617708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 10:34:39.027553  617708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 10:34:39.036029  617708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:34:39.143122  617708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 10:34:39.235446  617708 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 10:34:39.235560  617708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 10:34:39.240300  617708 start.go:563] Will wait 60s for crictl version
	I1209 10:34:39.240373  617708 ssh_runner.go:195] Run: which crictl
	I1209 10:34:39.243913  617708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 10:34:39.283891  617708 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
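The two waits above (60s for /var/run/crio/crio.sock, then 60s for crictl version) are simple poll-until-deadline loops. A small Go sketch of the socket wait, polling locally with os.Stat rather than over SSH as the real start code does; the function name is illustrative:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a filesystem path until it exists or the deadline
// passes, mirroring the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}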
	I1209 10:34:39.283988  617708 ssh_runner.go:195] Run: crio --version
	I1209 10:34:39.310143  617708 ssh_runner.go:195] Run: crio --version
	I1209 10:34:39.340092  617708 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 10:34:39.341562  617708 main.go:141] libmachine: (addons-156041) Calling .GetIP
	I1209 10:34:39.344421  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:39.344830  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:39.344854  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:39.345030  617708 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 10:34:39.348824  617708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:34:39.360923  617708 kubeadm.go:883] updating cluster {Name:addons-156041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-156041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 10:34:39.361056  617708 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:34:39.361105  617708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 10:34:39.391048  617708 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 10:34:39.391137  617708 ssh_runner.go:195] Run: which lz4
	I1209 10:34:39.395012  617708 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 10:34:39.398788  617708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 10:34:39.398818  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 10:34:40.579576  617708 crio.go:462] duration metric: took 1.184590471s to copy over tarball
	I1209 10:34:40.579674  617708 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 10:34:42.648090  617708 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.068367956s)
	I1209 10:34:42.648129  617708 crio.go:469] duration metric: took 2.068514027s to extract the tarball
	I1209 10:34:42.648138  617708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 10:34:42.685312  617708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 10:34:42.724362  617708 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 10:34:42.724398  617708 cache_images.go:84] Images are preloaded, skipping loading
	I1209 10:34:42.724414  617708 kubeadm.go:934] updating node { 192.168.39.161 8443 v1.31.2 crio true true} ...
	I1209 10:34:42.724567  617708 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-156041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-156041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 10:34:42.724639  617708 ssh_runner.go:195] Run: crio config
	I1209 10:34:42.769917  617708 cni.go:84] Creating CNI manager for ""
	I1209 10:34:42.769946  617708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 10:34:42.769956  617708 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 10:34:42.769981  617708 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.161 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-156041 NodeName:addons-156041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 10:34:42.770112  617708 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-156041"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.161"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.161"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 10:34:42.770193  617708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 10:34:42.779819  617708 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 10:34:42.779900  617708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 10:34:42.788792  617708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1209 10:34:42.804278  617708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 10:34:42.819257  617708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1209 10:34:42.834043  617708 ssh_runner.go:195] Run: grep 192.168.39.161	control-plane.minikube.internal$ /etc/hosts
	I1209 10:34:42.837415  617708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:34:42.848699  617708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:34:42.949230  617708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:34:42.964274  617708 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041 for IP: 192.168.39.161
	I1209 10:34:42.964306  617708 certs.go:194] generating shared ca certs ...
	I1209 10:34:42.964331  617708 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:42.964509  617708 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 10:34:43.248749  617708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt ...
	I1209 10:34:43.248782  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt: {Name:mk622bdbb21507c1952d11c71417ae3a15eb5308 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:43.248955  617708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key ...
	I1209 10:34:43.248966  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key: {Name:mke21334aad9871880bb7c0cf3c037a39323dbe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:43.249041  617708 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 10:34:43.609861  617708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt ...
	I1209 10:34:43.609898  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt: {Name:mkd1bd2a4594eb40096825a894a5a40d1347c0f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:43.610080  617708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key ...
	I1209 10:34:43.610092  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key: {Name:mkb829d565cca0c0464dd4998a8770ec52136425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:43.610166  617708 certs.go:256] generating profile certs ...
	I1209 10:34:43.610254  617708 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.key
	I1209 10:34:43.610269  617708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt with IP's: []
	I1209 10:34:43.991857  617708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt ...
	I1209 10:34:43.991889  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: {Name:mk4acbf815427ee71599db617da9affaa4b132e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:43.992086  617708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.key ...
	I1209 10:34:43.992102  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.key: {Name:mk27229f75f751fd341adb8e2be9816d7605d2c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:43.992211  617708 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.key.7d199bea
	I1209 10:34:43.992232  617708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.crt.7d199bea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.161]
	I1209 10:34:44.231298  617708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.crt.7d199bea ...
	I1209 10:34:44.231339  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.crt.7d199bea: {Name:mk826768f1c37d7a376a1dc76e73b02655cee348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:44.231577  617708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.key.7d199bea ...
	I1209 10:34:44.231599  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.key.7d199bea: {Name:mke74e9d81fe6d8530d9bbcb64d3edf05a659851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:44.231723  617708 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.crt.7d199bea -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.crt
	I1209 10:34:44.231818  617708 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.key.7d199bea -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.key
	I1209 10:34:44.231877  617708 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/proxy-client.key
	I1209 10:34:44.231900  617708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/proxy-client.crt with IP's: []
	I1209 10:34:44.552711  617708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/proxy-client.crt ...
	I1209 10:34:44.552747  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/proxy-client.crt: {Name:mk9c785b66230b8e800afed97f1d945d7e6f65d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:44.552953  617708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/proxy-client.key ...
	I1209 10:34:44.552979  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/proxy-client.key: {Name:mkb1c95d8a7bce4ef5bd80b6946bff12403ee745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:44.553227  617708 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 10:34:44.553273  617708 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 10:34:44.553304  617708 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 10:34:44.553331  617708 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 10:34:44.554072  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 10:34:44.596811  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 10:34:44.645358  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 10:34:44.667497  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 10:34:44.689470  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 10:34:44.711968  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 10:34:44.734723  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 10:34:44.756160  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 10:34:44.777957  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 10:34:44.799771  617708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 10:34:44.815818  617708 ssh_runner.go:195] Run: openssl version
	I1209 10:34:44.821770  617708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 10:34:44.832527  617708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:34:44.836683  617708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:34:44.836756  617708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:34:44.842276  617708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 10:34:44.852431  617708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 10:34:44.856064  617708 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 10:34:44.856128  617708 kubeadm.go:392] StartCluster: {Name:addons-156041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-156041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:34:44.856228  617708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 10:34:44.856321  617708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 10:34:44.896386  617708 cri.go:89] found id: ""
	I1209 10:34:44.896477  617708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 10:34:44.905880  617708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 10:34:44.914902  617708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 10:34:44.924243  617708 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 10:34:44.924265  617708 kubeadm.go:157] found existing configuration files:
	
	I1209 10:34:44.924309  617708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 10:34:44.932900  617708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 10:34:44.932968  617708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 10:34:44.941842  617708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 10:34:44.950239  617708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 10:34:44.950298  617708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 10:34:44.959133  617708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 10:34:44.967550  617708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 10:34:44.967611  617708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 10:34:44.976393  617708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 10:34:44.984503  617708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 10:34:44.984553  617708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 10:34:44.993046  617708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 10:34:45.144153  617708 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 10:34:55.293039  617708 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 10:34:55.293158  617708 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 10:34:55.293302  617708 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 10:34:55.293486  617708 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 10:34:55.293650  617708 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 10:34:55.293754  617708 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 10:34:55.295582  617708 out.go:235]   - Generating certificates and keys ...
	I1209 10:34:55.295681  617708 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 10:34:55.295742  617708 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 10:34:55.295826  617708 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 10:34:55.295925  617708 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 10:34:55.295998  617708 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 10:34:55.296067  617708 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 10:34:55.296146  617708 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 10:34:55.296263  617708 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-156041 localhost] and IPs [192.168.39.161 127.0.0.1 ::1]
	I1209 10:34:55.296310  617708 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 10:34:55.296460  617708 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-156041 localhost] and IPs [192.168.39.161 127.0.0.1 ::1]
	I1209 10:34:55.296569  617708 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 10:34:55.296665  617708 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 10:34:55.296728  617708 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 10:34:55.296806  617708 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 10:34:55.296884  617708 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 10:34:55.296974  617708 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 10:34:55.297040  617708 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 10:34:55.297125  617708 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 10:34:55.297212  617708 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 10:34:55.297314  617708 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 10:34:55.297399  617708 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 10:34:55.299074  617708 out.go:235]   - Booting up control plane ...
	I1209 10:34:55.299169  617708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 10:34:55.299244  617708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 10:34:55.299312  617708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 10:34:55.299398  617708 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 10:34:55.299517  617708 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 10:34:55.299589  617708 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 10:34:55.299780  617708 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 10:34:55.299871  617708 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 10:34:55.299921  617708 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.580427ms
	I1209 10:34:55.299986  617708 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 10:34:55.300034  617708 kubeadm.go:310] [api-check] The API server is healthy after 5.00204533s
	I1209 10:34:55.300142  617708 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 10:34:55.300283  617708 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 10:34:55.300375  617708 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 10:34:55.300576  617708 kubeadm.go:310] [mark-control-plane] Marking the node addons-156041 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 10:34:55.300659  617708 kubeadm.go:310] [bootstrap-token] Using token: 1ez8ht.ew72wo64yxy4gta0
	I1209 10:34:55.302324  617708 out.go:235]   - Configuring RBAC rules ...
	I1209 10:34:55.302494  617708 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 10:34:55.302578  617708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 10:34:55.302694  617708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 10:34:55.302830  617708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 10:34:55.302970  617708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 10:34:55.303079  617708 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 10:34:55.303213  617708 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 10:34:55.303267  617708 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 10:34:55.303330  617708 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 10:34:55.303342  617708 kubeadm.go:310] 
	I1209 10:34:55.303439  617708 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 10:34:55.303452  617708 kubeadm.go:310] 
	I1209 10:34:55.303576  617708 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 10:34:55.303584  617708 kubeadm.go:310] 
	I1209 10:34:55.303605  617708 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 10:34:55.303655  617708 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 10:34:55.303698  617708 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 10:34:55.303703  617708 kubeadm.go:310] 
	I1209 10:34:55.303747  617708 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 10:34:55.303752  617708 kubeadm.go:310] 
	I1209 10:34:55.303836  617708 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 10:34:55.303867  617708 kubeadm.go:310] 
	I1209 10:34:55.303953  617708 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 10:34:55.304061  617708 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 10:34:55.304150  617708 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 10:34:55.304161  617708 kubeadm.go:310] 
	I1209 10:34:55.304233  617708 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 10:34:55.304327  617708 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 10:34:55.304341  617708 kubeadm.go:310] 
	I1209 10:34:55.304458  617708 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1ez8ht.ew72wo64yxy4gta0 \
	I1209 10:34:55.304610  617708 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 10:34:55.304631  617708 kubeadm.go:310] 	--control-plane 
	I1209 10:34:55.304637  617708 kubeadm.go:310] 
	I1209 10:34:55.304708  617708 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 10:34:55.304714  617708 kubeadm.go:310] 
	I1209 10:34:55.304821  617708 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1ez8ht.ew72wo64yxy4gta0 \
	I1209 10:34:55.304960  617708 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 10:34:55.304975  617708 cni.go:84] Creating CNI manager for ""
	I1209 10:34:55.304983  617708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 10:34:55.306396  617708 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 10:34:55.307738  617708 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 10:34:55.319758  617708 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 10:34:55.338262  617708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 10:34:55.338343  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:55.338370  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-156041 minikube.k8s.io/updated_at=2024_12_09T10_34_55_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=addons-156041 minikube.k8s.io/primary=true
	I1209 10:34:55.353438  617708 ops.go:34] apiserver oom_adj: -16
	I1209 10:34:55.457021  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:55.957278  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:56.457210  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:56.957858  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:57.458114  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:57.957308  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:58.457398  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:58.957294  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:59.457096  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:59.957109  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:35:00.072064  617708 kubeadm.go:1113] duration metric: took 4.733788723s to wait for elevateKubeSystemPrivileges
	I1209 10:35:00.072119  617708 kubeadm.go:394] duration metric: took 15.215996974s to StartCluster
	I1209 10:35:00.072148  617708 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:35:00.072271  617708 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:35:00.072735  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:35:00.072935  617708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 10:35:00.072942  617708 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:35:00.073027  617708 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1209 10:35:00.073172  617708 addons.go:69] Setting yakd=true in profile "addons-156041"
	I1209 10:35:00.073191  617708 addons.go:234] Setting addon yakd=true in "addons-156041"
	I1209 10:35:00.073193  617708 addons.go:69] Setting inspektor-gadget=true in profile "addons-156041"
	I1209 10:35:00.073210  617708 config.go:182] Loaded profile config "addons-156041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:35:00.073228  617708 addons.go:234] Setting addon inspektor-gadget=true in "addons-156041"
	I1209 10:35:00.073239  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.073227  617708 addons.go:69] Setting storage-provisioner=true in profile "addons-156041"
	I1209 10:35:00.073255  617708 addons.go:69] Setting ingress-dns=true in profile "addons-156041"
	I1209 10:35:00.073266  617708 addons.go:234] Setting addon ingress-dns=true in "addons-156041"
	I1209 10:35:00.073255  617708 addons.go:69] Setting ingress=true in profile "addons-156041"
	I1209 10:35:00.073281  617708 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-156041"
	I1209 10:35:00.073295  617708 addons.go:234] Setting addon ingress=true in "addons-156041"
	I1209 10:35:00.073281  617708 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-156041"
	I1209 10:35:00.073268  617708 addons.go:234] Setting addon storage-provisioner=true in "addons-156041"
	I1209 10:35:00.073310  617708 addons.go:69] Setting volumesnapshots=true in profile "addons-156041"
	I1209 10:35:00.073307  617708 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-156041"
	I1209 10:35:00.073318  617708 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-156041"
	I1209 10:35:00.073329  617708 addons.go:69] Setting cloud-spanner=true in profile "addons-156041"
	I1209 10:35:00.073329  617708 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-156041"
	I1209 10:35:00.073333  617708 addons.go:69] Setting gcp-auth=true in profile "addons-156041"
	I1209 10:35:00.073339  617708 addons.go:234] Setting addon cloud-spanner=true in "addons-156041"
	I1209 10:35:00.073348  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.073351  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.073350  617708 mustload.go:65] Loading cluster: addons-156041
	I1209 10:35:00.073354  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.073520  617708 config.go:182] Loaded profile config "addons-156041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:35:00.073298  617708 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-156041"
	I1209 10:35:00.073775  617708 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-156041"
	I1209 10:35:00.073790  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.073809  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.073835  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.073863  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.073321  617708 addons.go:234] Setting addon volumesnapshots=true in "addons-156041"
	I1209 10:35:00.073867  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.073888  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.073838  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.073811  617708 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-156041"
	I1209 10:35:00.074043  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.073890  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.073794  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.074300  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.073298  617708 addons.go:69] Setting volcano=true in profile "addons-156041"
	I1209 10:35:00.074331  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.074345  617708 addons.go:234] Setting addon volcano=true in "addons-156041"
	I1209 10:35:00.073250  617708 addons.go:69] Setting default-storageclass=true in profile "addons-156041"
	I1209 10:35:00.074424  617708 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-156041"
	I1209 10:35:00.073282  617708 addons.go:69] Setting registry=true in profile "addons-156041"
	I1209 10:35:00.074509  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.074537  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.074558  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.074514  617708 addons.go:234] Setting addon registry=true in "addons-156041"
	I1209 10:35:00.073325  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.074665  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.074688  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.074832  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.074843  617708 out.go:177] * Verifying Kubernetes components...
	I1209 10:35:00.074888  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.074907  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.073274  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.074970  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.074995  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.074853  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.075094  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.073275  617708 addons.go:69] Setting metrics-server=true in profile "addons-156041"
	I1209 10:35:00.075287  617708 addons.go:234] Setting addon metrics-server=true in "addons-156041"
	I1209 10:35:00.075318  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.073760  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.075428  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.073793  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.075513  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.073816  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.073306  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.076054  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.076073  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.077084  617708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:35:00.094729  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36289
	I1209 10:35:00.094903  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I1209 10:35:00.106387  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43933
	I1209 10:35:00.106548  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.106595  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.106693  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.106722  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.106816  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.106851  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.106857  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44337
	I1209 10:35:00.107137  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I1209 10:35:00.107317  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.107422  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.107502  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.107578  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.107915  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.107933  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.108051  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.108063  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.108111  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.108200  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.108207  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.108651  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.108722  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.108854  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.108864  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.108975  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.108986  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.109433  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.109459  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.109910  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.109990  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43677
	I1209 10:35:00.110143  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.110595  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.110617  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.110773  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.110796  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.114608  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.114688  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.114720  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.115213  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.115233  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.116152  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.116195  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.116795  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.116870  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.117248  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.117286  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.117458  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I1209 10:35:00.132986  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33187
	I1209 10:35:00.133630  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.134265  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.134286  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.134747  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.135205  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.139057  617708 addons.go:234] Setting addon default-storageclass=true in "addons-156041"
	I1209 10:35:00.139110  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.139515  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.139553  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.147754  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35967
	I1209 10:35:00.147817  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42173
	I1209 10:35:00.148671  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42379
	I1209 10:35:00.148739  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.148842  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38279
	I1209 10:35:00.148887  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.149524  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.149606  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.149609  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.149623  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.149628  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.149988  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.150097  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.150155  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.150180  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.150665  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.150713  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.150937  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.151102  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.151152  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.151170  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.151592  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.151639  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.151877  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.151959  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.151975  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.152022  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.152039  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.152406  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.152457  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.152509  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.153012  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.153048  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.153884  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.153923  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.154083  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39677
	I1209 10:35:00.154587  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.155105  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.155123  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.155335  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.155474  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.155867  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39765
	I1209 10:35:00.156311  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.156783  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.156807  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.157057  617708 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1209 10:35:00.157150  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.157297  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.158098  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46033
	I1209 10:35:00.158315  617708 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1209 10:35:00.158327  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I1209 10:35:00.158335  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1209 10:35:00.158358  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.160820  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.161313  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.161413  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.161465  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.161487  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.161752  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.161804  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.161996  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.162014  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.162083  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.162151  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.162197  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.162537  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.162714  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.162770  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.162897  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.163061  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.163211  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.163463  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.164495  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1209 10:35:00.164586  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.164623  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.165871  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.166819  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36185
	I1209 10:35:00.167201  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.167231  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1209 10:35:00.167792  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.167813  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.168095  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1209 10:35:00.168220  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.168365  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.169337  617708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1209 10:35:00.169357  617708 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1209 10:35:00.169379  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.170858  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1209 10:35:00.171497  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41309
	I1209 10:35:00.171949  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.172694  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.172714  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.173042  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.173218  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1209 10:35:00.173474  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.173579  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.173597  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.173858  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.173938  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.174189  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.174238  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.174491  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.174704  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.174870  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.175544  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1209 10:35:00.175684  617708 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1209 10:35:00.176810  617708 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 10:35:00.176830  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1209 10:35:00.176851  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.178302  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1209 10:35:00.179485  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1209 10:35:00.180638  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1209 10:35:00.181485  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.181968  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.182036  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.182153  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.182298  617708 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1209 10:35:00.182324  617708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1209 10:35:00.182347  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.182391  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.182561  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.182724  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.185153  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36883
	I1209 10:35:00.185545  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.185995  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.186094  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.186231  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.186373  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.186473  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.186681  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.186754  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40277
	I1209 10:35:00.187056  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.187750  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.187861  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I1209 10:35:00.188425  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.188619  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.188631  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.188918  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.188938  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.189159  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.189175  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.189375  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.189983  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.190028  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.190262  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.190301  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.190487  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.190576  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.192236  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.193776  617708 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-156041"
	I1209 10:35:00.193828  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.194232  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.194279  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.195460  617708 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 10:35:00.196802  617708 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 10:35:00.196824  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 10:35:00.196847  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.197604  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36539
	I1209 10:35:00.200567  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.201206  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.201235  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.201463  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.201650  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.201818  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.201987  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.202342  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I1209 10:35:00.202488  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39223
	I1209 10:35:00.202615  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.203094  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.203115  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.203277  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.203836  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.203853  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.204301  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.204366  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.204581  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.205716  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.205766  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.206618  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.207154  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.207963  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.207982  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.208422  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.208611  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.209504  617708 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1209 10:35:00.210722  617708 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1209 10:35:00.210743  617708 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1209 10:35:00.210766  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.211474  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.212701  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42909
	I1209 10:35:00.212865  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40549
	I1209 10:35:00.212931  617708 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1209 10:35:00.213586  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.213676  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.214338  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.214368  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.214755  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I1209 10:35:00.215353  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.215392  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.215415  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.215606  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.215629  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.215716  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.215738  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.215805  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.216282  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.216298  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.216358  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.216400  617708 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 10:35:00.216406  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.216550  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.216842  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.216843  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.216947  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.217152  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32863
	I1209 10:35:00.217337  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.217908  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45867
	I1209 10:35:00.217905  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.217915  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.218453  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36569
	I1209 10:35:00.218470  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.218805  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.218989  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.219007  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.219099  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.219371  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.219395  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.219378  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.219679  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.219813  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.219819  617708 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 10:35:00.219652  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.219993  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.220073  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.220086  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:00.220095  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.220097  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:00.220238  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.220260  617708 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1209 10:35:00.220647  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:00.220665  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.220679  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:00.220687  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:00.220694  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:00.220700  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:00.220856  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.220934  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:00.220954  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:00.220961  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	W1209 10:35:00.221098  617708 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1209 10:35:00.221435  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.221830  617708 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 10:35:00.221850  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1209 10:35:00.221871  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.221973  617708 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 10:35:00.221990  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1209 10:35:00.222009  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.222553  617708 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1209 10:35:00.223074  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.224416  617708 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1209 10:35:00.224502  617708 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1209 10:35:00.224554  617708 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 10:35:00.224572  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1209 10:35:00.224593  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.226610  617708 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1209 10:35:00.226629  617708 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1209 10:35:00.226648  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.226651  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.226616  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32875
	I1209 10:35:00.226791  617708 out.go:177]   - Using image docker.io/registry:2.8.3
	I1209 10:35:00.227247  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.227250  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.227269  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.227663  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.227774  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.227821  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33269
	I1209 10:35:00.227936  617708 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1209 10:35:00.227951  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1209 10:35:00.227967  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.228773  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.228866  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.228884  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.228889  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.228904  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.228925  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.228949  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.229164  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.229186  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.229229  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.229455  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.229459  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.229671  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.229806  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.229819  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.230338  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.230614  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.230862  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.230887  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.231054  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.231244  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.231595  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.231614  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.231786  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.232164  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.232161  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.232234  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.232451  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.232542  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.232639  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.232854  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.232912  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.233120  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.233526  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.234695  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.235039  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.235066  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.235261  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.235436  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.235649  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.235757  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.235916  617708 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	W1209 10:35:00.236497  617708 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35788->192.168.39.161:22: read: connection reset by peer
	I1209 10:35:00.236529  617708 retry.go:31] will retry after 205.977713ms: ssh: handshake failed: read tcp 192.168.39.1:35788->192.168.39.161:22: read: connection reset by peer
	I1209 10:35:00.237658  617708 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 10:35:00.237679  617708 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 10:35:00.237712  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.240922  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.241326  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.241352  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.241580  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.241744  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.241868  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.241974  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.248655  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40023
	I1209 10:35:00.249180  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.249780  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.249805  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.250127  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.250324  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.252010  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.252247  617708 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 10:35:00.252262  617708 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 10:35:00.252279  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.254428  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I1209 10:35:00.255031  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.255637  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.255757  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.255771  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.255852  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.255868  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.255899  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.256050  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.256186  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.256313  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.256606  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.256792  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.258268  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.260209  617708 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1209 10:35:00.261526  617708 out.go:177]   - Using image docker.io/busybox:stable
	I1209 10:35:00.262862  617708 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 10:35:00.262885  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1209 10:35:00.262908  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.266031  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.266525  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.266551  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.266705  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.266922  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.267060  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.267219  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.524078  617708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:35:00.524157  617708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 10:35:00.556787  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 10:35:00.568737  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 10:35:00.604237  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1209 10:35:00.616040  617708 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1209 10:35:00.616083  617708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1209 10:35:00.646338  617708 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1209 10:35:00.646371  617708 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1209 10:35:00.732967  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 10:35:00.742502  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 10:35:00.749111  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 10:35:00.756379  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 10:35:00.801038  617708 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1209 10:35:00.801070  617708 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1209 10:35:00.823280  617708 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1209 10:35:00.823314  617708 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1209 10:35:00.853505  617708 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1209 10:35:00.853543  617708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1209 10:35:00.865237  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 10:35:00.877339  617708 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 10:35:00.877362  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1209 10:35:00.893337  617708 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1209 10:35:00.893365  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1209 10:35:00.904948  617708 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1209 10:35:00.904979  617708 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1209 10:35:00.966833  617708 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1209 10:35:00.966864  617708 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1209 10:35:00.997544  617708 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1209 10:35:00.997586  617708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1209 10:35:01.029431  617708 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 10:35:01.029460  617708 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 10:35:01.053665  617708 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 10:35:01.053697  617708 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 10:35:01.081333  617708 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1209 10:35:01.081361  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1209 10:35:01.143187  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1209 10:35:01.180337  617708 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1209 10:35:01.180369  617708 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1209 10:35:01.216920  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1209 10:35:01.220096  617708 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1209 10:35:01.220123  617708 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1209 10:35:01.247759  617708 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1209 10:35:01.247803  617708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1209 10:35:01.327854  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 10:35:01.380827  617708 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1209 10:35:01.380858  617708 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1209 10:35:01.414854  617708 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1209 10:35:01.414879  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1209 10:35:01.468511  617708 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1209 10:35:01.468554  617708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1209 10:35:01.599427  617708 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 10:35:01.599456  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1209 10:35:01.617771  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1209 10:35:01.668722  617708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1209 10:35:01.668748  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1209 10:35:01.852015  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 10:35:01.941830  617708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1209 10:35:01.941865  617708 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1209 10:35:02.050480  617708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1209 10:35:02.050509  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1209 10:35:02.421394  617708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1209 10:35:02.421440  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1209 10:35:02.827613  617708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 10:35:02.827647  617708 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1209 10:35:02.966980  617708 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.442773358s)
	I1209 10:35:02.967021  617708 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1209 10:35:02.967027  617708 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.442903391s)
	I1209 10:35:02.967799  617708 node_ready.go:35] waiting up to 6m0s for node "addons-156041" to be "Ready" ...
	I1209 10:35:02.971419  617708 node_ready.go:49] node "addons-156041" has status "Ready":"True"
	I1209 10:35:02.971454  617708 node_ready.go:38] duration metric: took 3.630999ms for node "addons-156041" to be "Ready" ...
	I1209 10:35:02.971469  617708 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 10:35:02.990608  617708 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:03.087417  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 10:35:03.474541  617708 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-156041" context rescaled to 1 replicas
	I1209 10:35:04.197602  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.640762302s)
	I1209 10:35:04.197670  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.628893777s)
	I1209 10:35:04.197721  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:04.197737  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:04.197679  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:04.197796  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:04.197730  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.593459686s)
	I1209 10:35:04.197847  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:04.197857  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:04.198080  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:04.198096  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:04.198107  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:04.198114  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:04.198286  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:04.198302  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:04.198290  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:04.198336  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:04.198338  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:04.198355  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:04.198365  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:04.198374  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:04.198397  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:04.198412  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:04.198637  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:04.198644  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:04.198665  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:04.198672  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:04.198673  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:04.198680  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:04.200887  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:04.200906  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:04.200906  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:04.755322  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.02231028s)
	I1209 10:35:04.755386  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:04.755402  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:04.755789  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:04.755811  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:04.755825  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:04.755831  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:04.756223  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:04.756272  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:04.756284  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:05.080642  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:05.934564  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.192011503s)
	I1209 10:35:05.934634  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:05.934646  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:05.935086  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:05.935115  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:05.935124  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:05.935157  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:05.935169  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:05.935476  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:05.935514  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:05.935522  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:06.021143  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:06.021183  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:06.021519  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:06.021542  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:07.106730  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:07.248509  617708 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1209 10:35:07.248568  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:07.251869  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:07.252243  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:07.252278  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:07.252444  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:07.252691  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:07.252899  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:07.253095  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:07.675590  617708 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1209 10:35:07.914577  617708 addons.go:234] Setting addon gcp-auth=true in "addons-156041"
	I1209 10:35:07.914650  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:07.915013  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:07.915068  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:07.931536  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I1209 10:35:07.932217  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:07.932828  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:07.932850  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:07.933292  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:07.934000  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:07.934064  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:07.950018  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37643
	I1209 10:35:07.950515  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:07.950999  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:07.951018  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:07.951474  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:07.951696  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:07.953645  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:07.953882  617708 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1209 10:35:07.953907  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:07.956691  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:07.957115  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:07.957146  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:07.957318  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:07.957536  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:07.957700  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:07.957845  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:08.758511  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.009356161s)
	I1209 10:35:08.758556  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.002140967s)
	I1209 10:35:08.758574  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.758587  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.758598  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.758610  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.758617  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.893335089s)
	I1209 10:35:08.758659  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.758677  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.758763  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.615544263s)
	I1209 10:35:08.758788  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.758798  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.758937  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.54197958s)
	I1209 10:35:08.759017  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.759046  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.759075  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.141274788s)
	I1209 10:35:08.759107  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.759122  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.759280  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.907232523s)
	I1209 10:35:08.759302  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	W1209 10:35:08.759309  617708 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 10:35:08.759323  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.759351  617708 retry.go:31] will retry after 138.951478ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 10:35:08.759359  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.759378  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.759376  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.759390  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.759400  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.759407  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.759416  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.759423  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.759027  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.43113163s)
	I1209 10:35:08.759505  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.759525  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.759533  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.759544  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.759500  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.759583  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.759800  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.759810  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.759821  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.759829  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.759836  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.759851  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.759866  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.759886  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.759892  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.759902  617708 addons.go:475] Verifying addon ingress=true in "addons-156041"
	I1209 10:35:08.760125  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.760158  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.760165  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.760175  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.760181  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.760240  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.760246  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.760691  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.760723  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.760734  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.761390  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.761403  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.761622  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.761640  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.761650  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.761659  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.761845  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.761901  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.761922  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.761930  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.761852  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.761949  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.761960  617708 addons.go:475] Verifying addon metrics-server=true in "addons-156041"
	I1209 10:35:08.762058  617708 out.go:177] * Verifying ingress addon...
	I1209 10:35:08.761941  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.763224  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.763250  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.763256  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.763265  617708 addons.go:475] Verifying addon registry=true in "addons-156041"
	I1209 10:35:08.763414  617708 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-156041 service yakd-dashboard -n yakd-dashboard
	
	I1209 10:35:08.761355  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.763607  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.763632  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.763978  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.765273  617708 out.go:177] * Verifying registry addon...
	I1209 10:35:08.765308  617708 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1209 10:35:08.767396  617708 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1209 10:35:08.770984  617708 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1209 10:35:08.771007  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:08.791512  617708 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 10:35:08.791540  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:08.793415  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.793433  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.793736  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.793754  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.899302  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 10:35:09.281512  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:09.335439  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:09.561200  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:09.584144  617708 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.630218743s)
	I1209 10:35:09.584272  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.496779935s)
	I1209 10:35:09.584422  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:09.584457  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:09.584890  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:09.584971  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:09.584985  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:09.585009  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:09.585018  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:09.585388  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:09.585425  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:09.585444  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:09.585469  617708 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-156041"
	I1209 10:35:09.585925  617708 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 10:35:09.587475  617708 out.go:177] * Verifying csi-hostpath-driver addon...
	I1209 10:35:09.589173  617708 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1209 10:35:09.590114  617708 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1209 10:35:09.590719  617708 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1209 10:35:09.590739  617708 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1209 10:35:09.638221  617708 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 10:35:09.638252  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:09.703286  617708 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1209 10:35:09.703326  617708 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1209 10:35:09.783816  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:09.784120  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:09.942159  617708 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 10:35:09.942210  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1209 10:35:09.981015  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 10:35:10.096099  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:10.269214  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:10.270855  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:10.594720  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:10.660393  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.76103319s)
	I1209 10:35:10.660464  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:10.660482  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:10.660825  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:10.660893  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:10.660910  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:10.660918  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:10.660868  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:10.661199  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:10.661209  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:10.661217  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:10.769697  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:10.771091  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:11.103535  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:11.320313  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:11.320371  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:11.347720  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.366655085s)
	I1209 10:35:11.347789  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:11.347805  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:11.348213  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:11.348257  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:11.348269  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:11.348277  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:11.348537  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:11.348553  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:11.348594  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:11.349654  617708 addons.go:475] Verifying addon gcp-auth=true in "addons-156041"
	I1209 10:35:11.351214  617708 out.go:177] * Verifying gcp-auth addon...
	I1209 10:35:11.353918  617708 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1209 10:35:11.410828  617708 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 10:35:11.410854  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:11.595766  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:11.772703  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:11.776697  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:11.857960  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:12.003166  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:12.095430  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:12.270151  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:12.272055  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:12.358097  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:12.596066  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:12.770254  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:12.772057  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:12.859268  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:13.095439  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:13.269784  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:13.270754  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:13.357598  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:13.595145  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:13.770128  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:13.771841  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:13.857899  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:14.095391  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:14.269550  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:14.270780  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:14.357663  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:14.497528  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:14.595716  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:14.770862  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:14.771418  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:14.859350  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:15.344740  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:15.445241  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:15.445711  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:15.445933  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:15.595493  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:15.771626  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:15.772596  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:15.857751  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:16.097083  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:16.271870  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:16.272158  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:16.360610  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:16.509710  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:16.599718  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:16.770558  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:16.772222  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:16.857965  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:17.095117  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:17.270235  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:17.270529  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:17.357281  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:17.594959  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:17.770261  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:17.771384  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:17.858857  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:18.095432  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:18.272233  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:18.272337  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:18.358183  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:18.595194  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:18.769389  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:18.770895  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:18.857959  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:18.996622  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:19.096555  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:19.269814  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:19.271458  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:19.357123  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:19.595054  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:19.770055  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:19.772716  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:19.857378  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:20.095979  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:20.270532  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:20.272648  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:20.357926  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:20.595341  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:20.770317  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:20.772213  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:20.858059  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:21.095441  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:21.269967  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:21.271635  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:21.357844  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:21.497323  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:21.594263  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:21.769887  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:21.770680  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:21.857519  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:22.094851  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:22.269545  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:22.270741  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:22.357787  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:22.594719  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:22.769525  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:22.771209  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:22.857992  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:23.094935  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:23.269871  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:23.272077  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:23.357909  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:23.594697  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:23.770012  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:23.770998  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:23.858021  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:23.996141  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:24.096322  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:24.268895  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:24.270686  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:24.357395  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:24.908693  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:24.909352  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:24.909968  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:24.910162  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:25.094915  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:25.269620  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:25.271449  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:25.357056  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:25.596798  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:25.769920  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:25.771941  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:25.857521  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:25.996869  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:26.095049  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:26.271108  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:26.271470  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:26.357773  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:26.594992  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:26.770375  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:26.771782  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:26.857537  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:27.094955  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:27.270711  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:27.271691  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:27.357631  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:27.594434  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:27.770273  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:27.770996  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:27.858350  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:27.998875  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:28.095261  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:28.290633  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:28.290974  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:28.357248  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:28.594967  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:28.770453  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:28.771966  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:28.857678  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:29.094614  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:29.269595  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:29.271583  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:29.357945  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:29.595446  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:29.771056  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:29.772122  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:29.858422  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:30.094678  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:30.269919  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:30.271225  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:30.357407  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:30.497372  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:30.594675  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:30.769769  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:30.771320  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:30.856918  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:31.095331  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:31.269323  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:31.270560  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:31.357807  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:31.595049  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:31.770402  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:31.770671  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:31.857536  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:32.094590  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:32.270094  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:32.272296  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:32.369972  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:32.497633  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:32.594284  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:32.770903  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:32.771762  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:32.857474  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:33.094201  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:33.269074  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:33.270289  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:33.358202  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:33.596726  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:33.771834  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:33.772008  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:33.871602  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:34.094926  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:34.275523  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:34.277329  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:34.374066  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:34.504658  617708 pod_ready.go:93] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"True"
	I1209 10:35:34.504684  617708 pod_ready.go:82] duration metric: took 31.514043184s for pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.504694  617708 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cd4lm" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.510154  617708 pod_ready.go:93] pod "coredns-7c65d6cfc9-cd4lm" in "kube-system" namespace has status "Ready":"True"
	I1209 10:35:34.510191  617708 pod_ready.go:82] duration metric: took 5.489042ms for pod "coredns-7c65d6cfc9-cd4lm" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.510203  617708 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qll9z" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.511896  617708 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-qll9z" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-qll9z" not found
	I1209 10:35:34.511914  617708 pod_ready.go:82] duration metric: took 1.705568ms for pod "coredns-7c65d6cfc9-qll9z" in "kube-system" namespace to be "Ready" ...
	E1209 10:35:34.511924  617708 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-qll9z" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-qll9z" not found
	I1209 10:35:34.511929  617708 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-156041" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.516171  617708 pod_ready.go:93] pod "etcd-addons-156041" in "kube-system" namespace has status "Ready":"True"
	I1209 10:35:34.516188  617708 pod_ready.go:82] duration metric: took 4.25298ms for pod "etcd-addons-156041" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.516196  617708 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-156041" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.520411  617708 pod_ready.go:93] pod "kube-apiserver-addons-156041" in "kube-system" namespace has status "Ready":"True"
	I1209 10:35:34.520436  617708 pod_ready.go:82] duration metric: took 4.232655ms for pod "kube-apiserver-addons-156041" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.520449  617708 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-156041" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.594962  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:34.694080  617708 pod_ready.go:93] pod "kube-controller-manager-addons-156041" in "kube-system" namespace has status "Ready":"True"
	I1209 10:35:34.694108  617708 pod_ready.go:82] duration metric: took 173.6504ms for pod "kube-controller-manager-addons-156041" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.694122  617708 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bthmb" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.770416  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:34.771768  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:34.857592  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:35.094806  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:35.095368  617708 pod_ready.go:93] pod "kube-proxy-bthmb" in "kube-system" namespace has status "Ready":"True"
	I1209 10:35:35.095392  617708 pod_ready.go:82] duration metric: took 401.261193ms for pod "kube-proxy-bthmb" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:35.095406  617708 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-156041" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:35.269859  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:35.276754  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:35.357702  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:35.494772  617708 pod_ready.go:93] pod "kube-scheduler-addons-156041" in "kube-system" namespace has status "Ready":"True"
	I1209 10:35:35.494795  617708 pod_ready.go:82] duration metric: took 399.378278ms for pod "kube-scheduler-addons-156041" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:35.494807  617708 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-kjjpq" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:35.595057  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:35.769834  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:35.771256  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:35.870434  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:35.894109  617708 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-kjjpq" in "kube-system" namespace has status "Ready":"True"
	I1209 10:35:35.894136  617708 pod_ready.go:82] duration metric: took 399.321311ms for pod "nvidia-device-plugin-daemonset-kjjpq" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:35.894148  617708 pod_ready.go:39] duration metric: took 32.922661121s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 10:35:35.894190  617708 api_server.go:52] waiting for apiserver process to appear ...
	I1209 10:35:35.894256  617708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 10:35:35.934416  617708 api_server.go:72] duration metric: took 35.861440295s to wait for apiserver process to appear ...
	I1209 10:35:35.934471  617708 api_server.go:88] waiting for apiserver healthz status ...
	I1209 10:35:35.934501  617708 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I1209 10:35:35.939760  617708 api_server.go:279] https://192.168.39.161:8443/healthz returned 200:
	ok
	I1209 10:35:35.940811  617708 api_server.go:141] control plane version: v1.31.2
	I1209 10:35:35.940845  617708 api_server.go:131] duration metric: took 6.365033ms to wait for apiserver health ...
	I1209 10:35:35.940857  617708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 10:35:36.098124  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:36.101263  617708 system_pods.go:59] 18 kube-system pods found
	I1209 10:35:36.101310  617708 system_pods.go:61] "amd-gpu-device-plugin-hbkzd" [68ff1229-b428-4958-bcad-1fa9f1bb55a4] Running
	I1209 10:35:36.101319  617708 system_pods.go:61] "coredns-7c65d6cfc9-cd4lm" [29f3ba07-4465-49c1-89c9-7963559eb074] Running
	I1209 10:35:36.101330  617708 system_pods.go:61] "csi-hostpath-attacher-0" [4ce59eb1-f6b0-42e3-b167-82743dead6d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 10:35:36.101345  617708 system_pods.go:61] "csi-hostpath-resizer-0" [12cbf9e5-ab92-4e05-a5bb-1aa38a653bd3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 10:35:36.101356  617708 system_pods.go:61] "csi-hostpathplugin-rk6qq" [c81f365c-4fbf-46b9-80d2-7388776c3da4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 10:35:36.101363  617708 system_pods.go:61] "etcd-addons-156041" [0db5fec8-be2b-43b9-95f0-bf7f1d4559d6] Running
	I1209 10:35:36.101373  617708 system_pods.go:61] "kube-apiserver-addons-156041" [b075ff6d-c2d2-4302-ad7e-ead23095ec56] Running
	I1209 10:35:36.101379  617708 system_pods.go:61] "kube-controller-manager-addons-156041" [cd4e8d19-1671-4024-8696-b12865565898] Running
	I1209 10:35:36.101384  617708 system_pods.go:61] "kube-ingress-dns-minikube" [dbc14232-0f6b-4848-9da8-d14681daebc5] Running
	I1209 10:35:36.101390  617708 system_pods.go:61] "kube-proxy-bthmb" [5a3b6ebf-90ff-4b75-b064-8de7e85140a0] Running
	I1209 10:35:36.101398  617708 system_pods.go:61] "kube-scheduler-addons-156041" [35d03d6e-d290-4cfd-b722-d5ac4682b7af] Running
	I1209 10:35:36.101410  617708 system_pods.go:61] "metrics-server-84c5f94fbc-s7gmn" [a2e3bba5-5ed2-4131-a072-a3597c3d28b1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 10:35:36.101420  617708 system_pods.go:61] "nvidia-device-plugin-daemonset-kjjpq" [9d6efa63-ad7e-417c-9a30-6ae237fb8824] Running
	I1209 10:35:36.101432  617708 system_pods.go:61] "registry-5cc95cd69-dz5k9" [94e4ed5a-c1d2-4327-99af-d2d3f88d0300] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 10:35:36.101441  617708 system_pods.go:61] "registry-proxy-8fjdn" [92870ba1-49e0-461f-91f0-1d0ee71c79d7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 10:35:36.101454  617708 system_pods.go:61] "snapshot-controller-56fcc65765-pf49d" [2d37dd49-32af-4d58-917a-73cafe8fdf4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 10:35:36.101466  617708 system_pods.go:61] "snapshot-controller-56fcc65765-zh99l" [ded01d68-2ce6-4cfe-99d0-672c5a04ce9a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 10:35:36.101479  617708 system_pods.go:61] "storage-provisioner" [105ef2e5-38ab-44ff-9b22-17aea32e722a] Running
	I1209 10:35:36.101492  617708 system_pods.go:74] duration metric: took 160.626319ms to wait for pod list to return data ...
	I1209 10:35:36.101507  617708 default_sa.go:34] waiting for default service account to be created ...
	I1209 10:35:36.270757  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:36.272139  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:36.296813  617708 default_sa.go:45] found service account: "default"
	I1209 10:35:36.296844  617708 default_sa.go:55] duration metric: took 195.325773ms for default service account to be created ...
	I1209 10:35:36.296857  617708 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 10:35:36.596266  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:36.598069  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:36.601663  617708 system_pods.go:86] 18 kube-system pods found
	I1209 10:35:36.601688  617708 system_pods.go:89] "amd-gpu-device-plugin-hbkzd" [68ff1229-b428-4958-bcad-1fa9f1bb55a4] Running
	I1209 10:35:36.601695  617708 system_pods.go:89] "coredns-7c65d6cfc9-cd4lm" [29f3ba07-4465-49c1-89c9-7963559eb074] Running
	I1209 10:35:36.601701  617708 system_pods.go:89] "csi-hostpath-attacher-0" [4ce59eb1-f6b0-42e3-b167-82743dead6d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 10:35:36.601708  617708 system_pods.go:89] "csi-hostpath-resizer-0" [12cbf9e5-ab92-4e05-a5bb-1aa38a653bd3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 10:35:36.601726  617708 system_pods.go:89] "csi-hostpathplugin-rk6qq" [c81f365c-4fbf-46b9-80d2-7388776c3da4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 10:35:36.601735  617708 system_pods.go:89] "etcd-addons-156041" [0db5fec8-be2b-43b9-95f0-bf7f1d4559d6] Running
	I1209 10:35:36.601740  617708 system_pods.go:89] "kube-apiserver-addons-156041" [b075ff6d-c2d2-4302-ad7e-ead23095ec56] Running
	I1209 10:35:36.601744  617708 system_pods.go:89] "kube-controller-manager-addons-156041" [cd4e8d19-1671-4024-8696-b12865565898] Running
	I1209 10:35:36.601750  617708 system_pods.go:89] "kube-ingress-dns-minikube" [dbc14232-0f6b-4848-9da8-d14681daebc5] Running
	I1209 10:35:36.601756  617708 system_pods.go:89] "kube-proxy-bthmb" [5a3b6ebf-90ff-4b75-b064-8de7e85140a0] Running
	I1209 10:35:36.601759  617708 system_pods.go:89] "kube-scheduler-addons-156041" [35d03d6e-d290-4cfd-b722-d5ac4682b7af] Running
	I1209 10:35:36.601765  617708 system_pods.go:89] "metrics-server-84c5f94fbc-s7gmn" [a2e3bba5-5ed2-4131-a072-a3597c3d28b1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 10:35:36.601768  617708 system_pods.go:89] "nvidia-device-plugin-daemonset-kjjpq" [9d6efa63-ad7e-417c-9a30-6ae237fb8824] Running
	I1209 10:35:36.601777  617708 system_pods.go:89] "registry-5cc95cd69-dz5k9" [94e4ed5a-c1d2-4327-99af-d2d3f88d0300] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 10:35:36.601782  617708 system_pods.go:89] "registry-proxy-8fjdn" [92870ba1-49e0-461f-91f0-1d0ee71c79d7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 10:35:36.601792  617708 system_pods.go:89] "snapshot-controller-56fcc65765-pf49d" [2d37dd49-32af-4d58-917a-73cafe8fdf4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 10:35:36.601801  617708 system_pods.go:89] "snapshot-controller-56fcc65765-zh99l" [ded01d68-2ce6-4cfe-99d0-672c5a04ce9a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 10:35:36.601806  617708 system_pods.go:89] "storage-provisioner" [105ef2e5-38ab-44ff-9b22-17aea32e722a] Running
	I1209 10:35:36.601815  617708 system_pods.go:126] duration metric: took 304.951645ms to wait for k8s-apps to be running ...
	I1209 10:35:36.601825  617708 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 10:35:36.601874  617708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:35:36.615991  617708 system_svc.go:56] duration metric: took 14.153504ms WaitForService to wait for kubelet
	I1209 10:35:36.616022  617708 kubeadm.go:582] duration metric: took 36.543056723s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 10:35:36.616043  617708 node_conditions.go:102] verifying NodePressure condition ...
	I1209 10:35:36.805849  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:36.806603  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:36.807421  617708 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:35:36.807468  617708 node_conditions.go:123] node cpu capacity is 2
	I1209 10:35:36.807493  617708 node_conditions.go:105] duration metric: took 191.443254ms to run NodePressure ...
	I1209 10:35:36.807510  617708 start.go:241] waiting for startup goroutines ...
	I1209 10:35:36.857277  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:37.094345  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:37.269848  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:37.271652  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:37.358034  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:37.595377  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:37.771242  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:37.772249  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:37.857895  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:38.095475  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:38.269650  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:38.271900  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:38.358581  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:38.596503  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:38.769812  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:38.771042  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:38.858043  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:39.096373  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:39.269694  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:39.271083  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:39.357877  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:39.594668  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:39.770674  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:39.771366  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:39.857010  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:40.095122  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:40.269613  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:40.271536  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:40.357861  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:40.594988  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:40.770807  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:40.772712  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:40.857533  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:41.094822  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:41.270130  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:41.271214  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:41.357702  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:41.594657  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:41.769564  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:41.771221  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:41.857810  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:42.094770  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:42.269989  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:42.271221  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:42.358116  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:42.595407  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:42.769832  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:42.772729  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:42.857260  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:43.093986  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:43.277016  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:43.277720  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:43.357691  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:43.594995  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:43.770564  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:43.870074  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:43.870160  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:44.095309  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:44.269779  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:44.271159  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:44.358627  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:44.594505  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:44.769640  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:44.771265  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:44.857701  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:45.094949  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:45.277470  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:45.278365  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:45.671458  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:45.672747  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:45.771383  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:45.771751  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:45.871300  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:46.095750  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:46.270428  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:46.271752  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:46.371073  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:46.596739  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:46.770059  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:46.771724  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:46.857871  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:47.095448  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:47.270008  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:47.271167  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:47.357890  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:47.595700  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:47.769854  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:47.771859  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:47.857555  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:48.094576  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:48.269227  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:48.270738  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:48.357667  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:48.594710  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:48.770625  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:48.771185  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:48.858252  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:49.095442  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:49.269591  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:49.270962  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:49.358621  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:49.594436  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:49.769618  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:49.771312  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:49.858560  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:50.094349  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:50.269890  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:50.271436  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:50.357873  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:50.595297  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:50.770952  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:50.771148  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:50.871289  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:51.095090  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:51.270555  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:51.271512  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:51.358115  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:51.616922  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:51.784297  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:51.784323  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:51.858253  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:52.095136  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:52.270981  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:52.272840  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:52.357789  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:52.595245  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:52.771892  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:52.773642  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:52.857030  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:53.095437  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:53.270004  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:53.271088  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:53.358496  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:53.595838  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:53.770377  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:53.771655  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:53.871179  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:54.097098  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:54.269310  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:54.270730  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:54.357514  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:54.594655  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:54.770034  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:54.772391  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:54.857614  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:55.096045  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:55.269618  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:55.270865  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:55.357947  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:55.594647  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:55.769510  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:55.771085  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:55.858109  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:56.095515  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:56.270365  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:56.271781  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:56.357881  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:56.594683  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:56.769688  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:56.771258  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:56.857766  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:57.094897  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:57.270165  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:57.271299  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:57.357882  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:57.594681  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:57.770510  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:57.771927  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:57.857542  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:58.094386  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:58.269317  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:58.270917  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:58.357892  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:58.595177  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:58.953903  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:58.954813  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:58.956027  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:59.095038  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:59.271571  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:59.272368  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:59.358055  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:59.595402  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:59.769170  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:59.771034  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:59.857631  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:00.094879  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:00.270262  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:00.271911  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:36:00.357010  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:00.594304  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:00.770539  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:00.771770  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:36:00.859890  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:01.363674  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:36:01.364021  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:01.364060  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:01.364682  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:01.594839  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:01.769651  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:01.771104  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:36:01.857552  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:02.094938  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:02.269708  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:02.270996  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:36:02.357626  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:02.593962  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:02.771227  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:02.771407  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:36:02.857257  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:03.095009  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:03.272217  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:36:03.273083  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:03.370396  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:03.594411  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:03.771098  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:03.771968  617708 kapi.go:107] duration metric: took 55.004572108s to wait for kubernetes.io/minikube-addons=registry ...
	I1209 10:36:03.870310  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:04.095382  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:04.269161  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:04.357532  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:04.593965  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:04.770694  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:04.869788  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:05.096340  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:05.270703  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:05.357703  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:05.595978  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:05.770462  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:05.857810  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:06.095011  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:06.269614  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:06.357991  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:06.988536  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:07.088031  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:07.088722  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:07.095122  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:07.270996  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:07.370124  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:07.595667  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:07.769986  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:07.857573  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:08.094231  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:08.270107  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:08.357949  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:08.595449  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:08.770470  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:08.861585  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:09.094445  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:09.269880  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:09.357883  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:09.595928  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:09.770006  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:09.857511  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:10.095738  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:10.270332  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:10.356899  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:10.596945  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:10.770271  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:10.857773  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:11.095067  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:11.269221  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:11.357726  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:11.606465  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:11.770415  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:11.863279  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:12.097012  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:12.278439  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:12.375904  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:12.595054  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:12.769695  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:12.859344  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:13.094817  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:13.269750  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:13.357418  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:13.596767  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:13.769623  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:14.099851  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:14.108699  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:14.272958  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:14.371930  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:14.595821  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:14.770159  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:14.857333  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:15.094796  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:15.270725  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:15.358605  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:15.595724  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:15.770083  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:15.861080  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:16.095160  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:16.269694  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:16.357024  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:16.595386  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:16.769424  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:16.857768  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:17.094875  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:17.270072  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:17.357237  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:17.595804  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:17.770107  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:17.857435  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:18.094595  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:18.284102  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:18.357730  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:18.594848  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:18.770324  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:18.857880  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:19.094583  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:19.270374  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:19.357432  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:19.594813  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:19.770323  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:19.858381  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:20.094510  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:20.269688  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:20.359919  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:20.594341  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:21.118300  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:21.118589  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:21.118989  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:21.270060  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:21.370252  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:21.596665  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:21.770041  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:21.857367  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:22.094396  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:22.270403  617708 kapi.go:107] duration metric: took 1m13.505086791s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1209 10:36:22.358346  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:22.595225  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:22.858403  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:23.102417  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:23.357660  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:23.595104  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:23.858076  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:24.095191  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:24.358084  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:24.595546  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:24.858233  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:25.096132  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:25.358395  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:25.595437  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:25.858080  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:26.095276  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:26.358030  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:26.595076  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:26.857806  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:27.094872  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:27.357408  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:27.594564  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:27.858411  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:28.095550  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:28.358388  617708 kapi.go:107] duration metric: took 1m17.00446715s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1209 10:36:28.359906  617708 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-156041 cluster.
	I1209 10:36:28.361368  617708 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1209 10:36:28.362582  617708 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1209 10:36:28.594259  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:29.094662  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:29.594940  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:30.095194  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:30.595950  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:31.095024  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:31.802111  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:32.095611  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:32.594841  617708 kapi.go:107] duration metric: took 1m23.004720869s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1209 10:36:32.596774  617708 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, storage-provisioner-rancher, amd-gpu-device-plugin, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1209 10:36:32.597893  617708 addons.go:510] duration metric: took 1m32.524857757s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner storage-provisioner-rancher amd-gpu-device-plugin metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1209 10:36:32.597942  617708 start.go:246] waiting for cluster config update ...
	I1209 10:36:32.597967  617708 start.go:255] writing updated cluster config ...
	I1209 10:36:32.598292  617708 ssh_runner.go:195] Run: rm -f paused
	I1209 10:36:32.654191  617708 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 10:36:32.655785  617708 out.go:177] * Done! kubectl is now configured to use "addons-156041" cluster and "default" namespace by default
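For reference, the three gcp-auth messages above describe how the addon behaves once enabled. A minimal illustrative sketch (not part of the test run) of opting a pod out of the credential mount, and of refreshing pods created before the addon was enabled, could look like the following; the pod name, the label value "true", and the minikube profile name are assumptions chosen to match the addons-156041 cluster used in this test, and the skip behavior is keyed off the gcp-auth-skip-secret label mentioned in the log message above:

    # hypothetical pod created with the skip label, so the gcp-auth webhook should not mount credentials
    # (label key taken from the log message above; the value "true" is only illustrative)
    kubectl --context addons-156041 run skip-gcp-auth-demo --image=nginx --labels=gcp-auth-skip-secret=true

    # re-run the addon with --refresh so pods created earlier pick up the credential mount,
    # as the log output above suggests (profile name assumed to match the cluster name)
    minikube -p addons-156041 addons enable gcp-auth --refresh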
	
	
	==> CRI-O <==
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.016628647Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=953e4374-fad2-4e66-b1df-0a6fea7ff721 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.016708213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=953e4374-fad2-4e66-b1df-0a6fea7ff721 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.033274531Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1fda0a7c-ebb8-4b83-bb93-29f7993dacf1 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.033369032Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1fda0a7c-ebb8-4b83-bb93-29f7993dacf1 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.034983871Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f13b030e-8225-4409-bfda-2da6fbf48101 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.036973498Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740810036933857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f13b030e-8225-4409-bfda-2da6fbf48101 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.037920913Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dbaa0f5c-e924-4c17-9c0b-703fd1bef174 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.038011675Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbaa0f5c-e924-4c17-9c0b-703fd1bef174 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.038452050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b168e9562cf64220e8a9c4b8beac12735826788d716e089cb7e34fdae303b2f,PodSandboxId:ac3a87c920ab6b60db82caa81206de42aa3c987c0937f468cb0066372d894be9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733740670424687973,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 63f24e56-7ff2-470f-aef8-eaf2dada0965,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff776faa63ae9dcd8b90e9dd5374810408479ee8ea214a4add287e5ebb8365cb,PodSandboxId:a3b4719506e0b21e0c353d9bf21cd48f3f51a0849734c71ca7ca024789a384dc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733740596918948092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 750ac467-92cd-4f0f-8288-ccecae9af727,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b8662f9b7c4ad2bcfa0c7b869d176d93891dce373ef0cc20195f0441038e4ec,PodSandboxId:3ec287923cd99fd5929c13600f30a74e3c70a493f37434f17913ee751ed141d3,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733740581247451879,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-wb74p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5aa912bf-5f46-477e-9e94-41d33fdb8358,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c43d4f61885a87b0aba332542acce1eb8cd51fb0e025f8c0cfafa32bfccc1697,PodSandboxId:8f1e649fc4b0870183f4fead2abb8469d10ef5bb7a181bdb9c56d91af979505e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733740567662732673,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8vp2g,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 12950ff4-b2e0-4f1b-b543-9990b22e5308,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ee9c95df1f4ede24dfd3c41f7b3ab5d5965af558374e35fd11ae922165ff16,PodSandboxId:08720ba90aecba112731274dbc49242e8c847466c25693356de7df902cc3719e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733740567232264662,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ss886,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 23ede1aa-75de-4edb-aa8d-7def95035755,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394115f8b81ee2fb3ecfaf0e3323653056440234aef7037a4f7ff12fbb0ce841,PodSandboxId:db0494eee99c9c8630c77e161af89475c89af4b5c3633f0b6b527c0ae756303b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733740545768877462,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s7gmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e3bba5-5ed2-4131-a072-a3597c3d28b1,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab70ffac425c2e03e7b07a17673804d9f18c462bdcf94ec70b00b8447221c59,PodSandboxId:1039eee9ff97dab2e58d5634aa48bbb54bd7d8a6daf640cead89335f0d80d391,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733740533539218551,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hbkzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ff1229-b428-4958-bcad-1fa9f1bb55a4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e14433ec4364cb8a936337442d35d1c3d6a2a4715ad0d5cc3f57b9d57f115d,PodSandboxId:9a1906fb89f534bbbc002c18d419267004637a6e31732e6062fcaf09015996cb,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733740516541331036,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbc14232-0f6b-4848-9da8-d14681daebc5,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75072ac379d65e9afbaf83daf64c569651d9cdc52aca2478b15b50422e9bb9a,PodSandboxId:349471269d2f80ee7
3b495f546885b4a41e8886cdc755a52b91ba41f16669f77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733740506615913756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105ef2e5-38ab-44ff-9b22-17aea32e722a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc8f7eb6f9e3d4fef8a83f5803bff22ae2c79298d343bfc37123f5f724cb7bc,PodSandboxId:5e55d595d4610753a6e4830b76c80
c3384530eb911db14afb2b430ba0c18eb75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733740503019052026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cd4lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29f3ba07-4465-49c1-89c9-7963559eb074,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caaaa0e3d8bef385b8e9d924a28834f3d28156742f878b33c9f3ca3839a8061d,PodSandboxId:f4d203d196bafbf9883905950139a3cefe11830b57e72e0d21291758e5a2a9ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733740501365124514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bthmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3b6ebf-90ff-4b75-b064-8de7e85140a0,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:32f3e3cc7ba9b044d6281de503c488e78f6fe147933ca3eddafd455ff57969f0,PodSandboxId:a92c79d980fe69e7985961b19f79015693e14e1d03775e561680218da8a22ac7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733740489663843532,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5804f4020c516b70575448cdaf565d0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:5be4a6b2466ed9d67eab208c35cd2ca892798f62c660be84f94db3f8ab52d683,PodSandboxId:10ff2598d0df13f7ddde83a59ee7ab879c10020281e7999028f08ab8d5451316,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733740489640050025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656fd1e0a35f1dffa82d5963f298e8ee,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69162feab
66f8d9454b1e7b9084d063da9017e38a8589fb25dcebe2fda8589e9,PodSandboxId:1587ded85bbe1aa9fa4b317e1399a61ead1e72f6ea584ec56ad355cd2c55d810,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733740489631611141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79094dcc5999520aa9623cd82617e9f,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d58a34c77c2cf417f71d91473
33ecb767357bdae1c925fa33ad4de8512e260b,PodSandboxId:97a28aa36f69636a4ed07c89c43e16f80a0c7f7c46b91799e4ef830ad16b1b57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733740489645315020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d59fd670d2a9b851fabc09bfa591e92,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=dbaa0f5c-e924-4c17-9c0b-703fd1bef174 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.056223743Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.056560178Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.078812379Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=447ba4c1-e086-4fc1-82d6-8a9e2be9879b name=/runtime.v1.RuntimeService/Version
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.078931279Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=447ba4c1-e086-4fc1-82d6-8a9e2be9879b name=/runtime.v1.RuntimeService/Version
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.080594271Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26187521-9198-4dfd-95b6-12aea184d348 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.082346242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740810082309043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26187521-9198-4dfd-95b6-12aea184d348 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.083002385Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=633fcb95-de39-4f57-aefb-0df4b8d8aa9a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.083138831Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=633fcb95-de39-4f57-aefb-0df4b8d8aa9a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.083662228Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b168e9562cf64220e8a9c4b8beac12735826788d716e089cb7e34fdae303b2f,PodSandboxId:ac3a87c920ab6b60db82caa81206de42aa3c987c0937f468cb0066372d894be9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733740670424687973,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 63f24e56-7ff2-470f-aef8-eaf2dada0965,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff776faa63ae9dcd8b90e9dd5374810408479ee8ea214a4add287e5ebb8365cb,PodSandboxId:a3b4719506e0b21e0c353d9bf21cd48f3f51a0849734c71ca7ca024789a384dc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733740596918948092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 750ac467-92cd-4f0f-8288-ccecae9af727,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b8662f9b7c4ad2bcfa0c7b869d176d93891dce373ef0cc20195f0441038e4ec,PodSandboxId:3ec287923cd99fd5929c13600f30a74e3c70a493f37434f17913ee751ed141d3,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733740581247451879,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-wb74p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5aa912bf-5f46-477e-9e94-41d33fdb8358,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c43d4f61885a87b0aba332542acce1eb8cd51fb0e025f8c0cfafa32bfccc1697,PodSandboxId:8f1e649fc4b0870183f4fead2abb8469d10ef5bb7a181bdb9c56d91af979505e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733740567662732673,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8vp2g,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 12950ff4-b2e0-4f1b-b543-9990b22e5308,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ee9c95df1f4ede24dfd3c41f7b3ab5d5965af558374e35fd11ae922165ff16,PodSandboxId:08720ba90aecba112731274dbc49242e8c847466c25693356de7df902cc3719e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733740567232264662,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ss886,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 23ede1aa-75de-4edb-aa8d-7def95035755,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394115f8b81ee2fb3ecfaf0e3323653056440234aef7037a4f7ff12fbb0ce841,PodSandboxId:db0494eee99c9c8630c77e161af89475c89af4b5c3633f0b6b527c0ae756303b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733740545768877462,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s7gmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e3bba5-5ed2-4131-a072-a3597c3d28b1,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab70ffac425c2e03e7b07a17673804d9f18c462bdcf94ec70b00b8447221c59,PodSandboxId:1039eee9ff97dab2e58d5634aa48bbb54bd7d8a6daf640cead89335f0d80d391,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733740533539218551,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hbkzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ff1229-b428-4958-bcad-1fa9f1bb55a4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e14433ec4364cb8a936337442d35d1c3d6a2a4715ad0d5cc3f57b9d57f115d,PodSandboxId:9a1906fb89f534bbbc002c18d419267004637a6e31732e6062fcaf09015996cb,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733740516541331036,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbc14232-0f6b-4848-9da8-d14681daebc5,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75072ac379d65e9afbaf83daf64c569651d9cdc52aca2478b15b50422e9bb9a,PodSandboxId:349471269d2f80ee7
3b495f546885b4a41e8886cdc755a52b91ba41f16669f77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733740506615913756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105ef2e5-38ab-44ff-9b22-17aea32e722a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc8f7eb6f9e3d4fef8a83f5803bff22ae2c79298d343bfc37123f5f724cb7bc,PodSandboxId:5e55d595d4610753a6e4830b76c80
c3384530eb911db14afb2b430ba0c18eb75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733740503019052026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cd4lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29f3ba07-4465-49c1-89c9-7963559eb074,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caaaa0e3d8bef385b8e9d924a28834f3d28156742f878b33c9f3ca3839a8061d,PodSandboxId:f4d203d196bafbf9883905950139a3cefe11830b57e72e0d21291758e5a2a9ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733740501365124514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bthmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3b6ebf-90ff-4b75-b064-8de7e85140a0,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:32f3e3cc7ba9b044d6281de503c488e78f6fe147933ca3eddafd455ff57969f0,PodSandboxId:a92c79d980fe69e7985961b19f79015693e14e1d03775e561680218da8a22ac7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733740489663843532,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5804f4020c516b70575448cdaf565d0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:5be4a6b2466ed9d67eab208c35cd2ca892798f62c660be84f94db3f8ab52d683,PodSandboxId:10ff2598d0df13f7ddde83a59ee7ab879c10020281e7999028f08ab8d5451316,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733740489640050025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656fd1e0a35f1dffa82d5963f298e8ee,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69162feab
66f8d9454b1e7b9084d063da9017e38a8589fb25dcebe2fda8589e9,PodSandboxId:1587ded85bbe1aa9fa4b317e1399a61ead1e72f6ea584ec56ad355cd2c55d810,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733740489631611141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79094dcc5999520aa9623cd82617e9f,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d58a34c77c2cf417f71d91473
33ecb767357bdae1c925fa33ad4de8512e260b,PodSandboxId:97a28aa36f69636a4ed07c89c43e16f80a0c7f7c46b91799e4ef830ad16b1b57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733740489645315020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d59fd670d2a9b851fabc09bfa591e92,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=633fcb95-de39-4f57-aefb-0df4b8d8aa9a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.126530383Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e79ec5c8-e5fd-4be6-841f-01b54b19a55d name=/runtime.v1.RuntimeService/Version
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.126624574Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e79ec5c8-e5fd-4be6-841f-01b54b19a55d name=/runtime.v1.RuntimeService/Version
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.128305194Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91174fac-2b26-49d6-838f-3135f09591c8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.129583622Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740810129555518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91174fac-2b26-49d6-838f-3135f09591c8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.130319037Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07f89a52-dfec-4e1f-aa97-43348024403b name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.130422248Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07f89a52-dfec-4e1f-aa97-43348024403b name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:40:10 addons-156041 crio[662]: time="2024-12-09 10:40:10.130977869Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b168e9562cf64220e8a9c4b8beac12735826788d716e089cb7e34fdae303b2f,PodSandboxId:ac3a87c920ab6b60db82caa81206de42aa3c987c0937f468cb0066372d894be9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733740670424687973,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 63f24e56-7ff2-470f-aef8-eaf2dada0965,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff776faa63ae9dcd8b90e9dd5374810408479ee8ea214a4add287e5ebb8365cb,PodSandboxId:a3b4719506e0b21e0c353d9bf21cd48f3f51a0849734c71ca7ca024789a384dc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733740596918948092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 750ac467-92cd-4f0f-8288-ccecae9af727,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b8662f9b7c4ad2bcfa0c7b869d176d93891dce373ef0cc20195f0441038e4ec,PodSandboxId:3ec287923cd99fd5929c13600f30a74e3c70a493f37434f17913ee751ed141d3,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733740581247451879,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-wb74p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5aa912bf-5f46-477e-9e94-41d33fdb8358,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c43d4f61885a87b0aba332542acce1eb8cd51fb0e025f8c0cfafa32bfccc1697,PodSandboxId:8f1e649fc4b0870183f4fead2abb8469d10ef5bb7a181bdb9c56d91af979505e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733740567662732673,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8vp2g,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 12950ff4-b2e0-4f1b-b543-9990b22e5308,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ee9c95df1f4ede24dfd3c41f7b3ab5d5965af558374e35fd11ae922165ff16,PodSandboxId:08720ba90aecba112731274dbc49242e8c847466c25693356de7df902cc3719e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733740567232264662,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ss886,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 23ede1aa-75de-4edb-aa8d-7def95035755,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394115f8b81ee2fb3ecfaf0e3323653056440234aef7037a4f7ff12fbb0ce841,PodSandboxId:db0494eee99c9c8630c77e161af89475c89af4b5c3633f0b6b527c0ae756303b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733740545768877462,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s7gmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e3bba5-5ed2-4131-a072-a3597c3d28b1,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab70ffac425c2e03e7b07a17673804d9f18c462bdcf94ec70b00b8447221c59,PodSandboxId:1039eee9ff97dab2e58d5634aa48bbb54bd7d8a6daf640cead89335f0d80d391,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733740533539218551,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hbkzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ff1229-b428-4958-bcad-1fa9f1bb55a4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e14433ec4364cb8a936337442d35d1c3d6a2a4715ad0d5cc3f57b9d57f115d,PodSandboxId:9a1906fb89f534bbbc002c18d419267004637a6e31732e6062fcaf09015996cb,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733740516541331036,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbc14232-0f6b-4848-9da8-d14681daebc5,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75072ac379d65e9afbaf83daf64c569651d9cdc52aca2478b15b50422e9bb9a,PodSandboxId:349471269d2f80ee7
3b495f546885b4a41e8886cdc755a52b91ba41f16669f77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733740506615913756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105ef2e5-38ab-44ff-9b22-17aea32e722a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc8f7eb6f9e3d4fef8a83f5803bff22ae2c79298d343bfc37123f5f724cb7bc,PodSandboxId:5e55d595d4610753a6e4830b76c80
c3384530eb911db14afb2b430ba0c18eb75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733740503019052026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cd4lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29f3ba07-4465-49c1-89c9-7963559eb074,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caaaa0e3d8bef385b8e9d924a28834f3d28156742f878b33c9f3ca3839a8061d,PodSandboxId:f4d203d196bafbf9883905950139a3cefe11830b57e72e0d21291758e5a2a9ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733740501365124514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bthmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3b6ebf-90ff-4b75-b064-8de7e85140a0,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:32f3e3cc7ba9b044d6281de503c488e78f6fe147933ca3eddafd455ff57969f0,PodSandboxId:a92c79d980fe69e7985961b19f79015693e14e1d03775e561680218da8a22ac7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733740489663843532,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5804f4020c516b70575448cdaf565d0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:5be4a6b2466ed9d67eab208c35cd2ca892798f62c660be84f94db3f8ab52d683,PodSandboxId:10ff2598d0df13f7ddde83a59ee7ab879c10020281e7999028f08ab8d5451316,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733740489640050025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656fd1e0a35f1dffa82d5963f298e8ee,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69162feab
66f8d9454b1e7b9084d063da9017e38a8589fb25dcebe2fda8589e9,PodSandboxId:1587ded85bbe1aa9fa4b317e1399a61ead1e72f6ea584ec56ad355cd2c55d810,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733740489631611141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79094dcc5999520aa9623cd82617e9f,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d58a34c77c2cf417f71d91473
33ecb767357bdae1c925fa33ad4de8512e260b,PodSandboxId:97a28aa36f69636a4ed07c89c43e16f80a0c7f7c46b91799e4ef830ad16b1b57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733740489645315020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d59fd670d2a9b851fabc09bfa591e92,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=07f89a52-dfec-4e1f-aa97-43348024403b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6b168e9562cf6       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago       Running             nginx                     0                   ac3a87c920ab6       nginx
	ff776faa63ae9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   a3b4719506e0b       busybox
	7b8662f9b7c4a       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   3ec287923cd99       ingress-nginx-controller-5f85ff4588-wb74p
	c43d4f61885a8       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             4 minutes ago       Exited              patch                     1                   8f1e649fc4b08       ingress-nginx-admission-patch-8vp2g
	41ee9c95df1f4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   08720ba90aecb       ingress-nginx-admission-create-ss886
	394115f8b81ee       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago       Running             metrics-server            0                   db0494eee99c9       metrics-server-84c5f94fbc-s7gmn
	eab70ffac425c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   1039eee9ff97d       amd-gpu-device-plugin-hbkzd
	15e14433ec436       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   9a1906fb89f53       kube-ingress-dns-minikube
	e75072ac379d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   349471269d2f8       storage-provisioner
	6bc8f7eb6f9e3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago       Running             coredns                   0                   5e55d595d4610       coredns-7c65d6cfc9-cd4lm
	caaaa0e3d8bef       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             5 minutes ago       Running             kube-proxy                0                   f4d203d196baf       kube-proxy-bthmb
	32f3e3cc7ba9b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   a92c79d980fe6       etcd-addons-156041
	7d58a34c77c2c       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             5 minutes ago       Running             kube-controller-manager   0                   97a28aa36f696       kube-controller-manager-addons-156041
	5be4a6b2466ed       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             5 minutes ago       Running             kube-apiserver            0                   10ff2598d0df1       kube-apiserver-addons-156041
	69162feab66f8       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             5 minutes ago       Running             kube-scheduler            0                   1587ded85bbe1       kube-scheduler-addons-156041
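The table above is the container runtime's own view of the node at the time of failure. As a hedged sketch only, assuming the addons-156041 profile were still running, an equivalent listing could normally be pulled by hand from the minikube VM:

  # open a one-off command on the minikube node and list all CRI-O containers,
  # including exited ones such as the admission create/patch jobs shown above
  minikube -p addons-156041 ssh -- sudo crictl ps -a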
	
	
	==> coredns [6bc8f7eb6f9e3d4fef8a83f5803bff22ae2c79298d343bfc37123f5f724cb7bc] <==
	[INFO] 10.244.0.8:34057 - 21201 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00011531s
	[INFO] 10.244.0.8:34057 - 7368 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000128338s
	[INFO] 10.244.0.8:34057 - 8518 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00012782s
	[INFO] 10.244.0.8:34057 - 43651 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000066456s
	[INFO] 10.244.0.8:34057 - 8607 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000131165s
	[INFO] 10.244.0.8:34057 - 60199 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000118105s
	[INFO] 10.244.0.8:34057 - 36104 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000062409s
	[INFO] 10.244.0.8:35151 - 3869 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00005979s
	[INFO] 10.244.0.8:35151 - 3629 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000033217s
	[INFO] 10.244.0.8:52494 - 33319 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000035789s
	[INFO] 10.244.0.8:52494 - 33098 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000024224s
	[INFO] 10.244.0.8:41506 - 4842 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000125994s
	[INFO] 10.244.0.8:41506 - 4384 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083557s
	[INFO] 10.244.0.8:34893 - 27318 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000078446s
	[INFO] 10.244.0.8:34893 - 27052 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130453s
	[INFO] 10.244.0.23:54595 - 3097 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000657449s
	[INFO] 10.244.0.23:54871 - 55462 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000123304s
	[INFO] 10.244.0.23:52220 - 6898 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001807s
	[INFO] 10.244.0.23:43363 - 27175 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000128035s
	[INFO] 10.244.0.23:41045 - 39996 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096081s
	[INFO] 10.244.0.23:47251 - 4949 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000215374s
	[INFO] 10.244.0.23:60248 - 12741 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004894138s
	[INFO] 10.244.0.23:58760 - 2997 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.005088101s
	[INFO] 10.244.0.26:60170 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000297991s
	[INFO] 10.244.0.26:42325 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101536s
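The CoreDNS entries above record the in-cluster lookups (service discovery and registry resolution) made while the test ran. A hedged example of how such pod logs are typically retrieved, assuming the kube context from this run were still available; the pod name is taken from the listing above:

  # fetch the CoreDNS pod's log from the kube-system namespace
  kubectl --context addons-156041 -n kube-system logs coredns-7c65d6cfc9-cd4lm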
	
	
	==> describe nodes <==
	Name:               addons-156041
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-156041
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=addons-156041
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T10_34_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-156041
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:34:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-156041
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:40:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 10:37:58 +0000   Mon, 09 Dec 2024 10:34:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 10:37:58 +0000   Mon, 09 Dec 2024 10:34:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 10:37:58 +0000   Mon, 09 Dec 2024 10:34:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 10:37:58 +0000   Mon, 09 Dec 2024 10:34:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    addons-156041
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 af3881500388411695ff2439e8e5bf3a
	  System UUID:                af388150-0388-4116-95ff-2439e8e5bf3a
	  Boot ID:                    3c44fb91-9e20-4f02-a13a-4dfef199939f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  default                     hello-world-app-55bf9c44b4-nfmnx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-wb74p    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m2s
	  kube-system                 amd-gpu-device-plugin-hbkzd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 coredns-7c65d6cfc9-cd4lm                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m11s
	  kube-system                 etcd-addons-156041                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m16s
	  kube-system                 kube-apiserver-addons-156041                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-addons-156041        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-proxy-bthmb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-scheduler-addons-156041                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 metrics-server-84c5f94fbc-s7gmn              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m5s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m8s   kube-proxy       
	  Normal  Starting                 5m16s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m16s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m16s  kubelet          Node addons-156041 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s  kubelet          Node addons-156041 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s  kubelet          Node addons-156041 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m15s  kubelet          Node addons-156041 status is now: NodeReady
	  Normal  RegisteredNode           5m12s  node-controller  Node addons-156041 event: Registered Node addons-156041 in Controller
	
	
	==> dmesg <==
	[Dec 9 10:35] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.149509] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.001627] kauditd_printk_skb: 113 callbacks suppressed
	[  +5.001348] kauditd_printk_skb: 162 callbacks suppressed
	[  +6.098600] kauditd_printk_skb: 44 callbacks suppressed
	[ +14.476865] kauditd_printk_skb: 5 callbacks suppressed
	[ +19.863642] kauditd_printk_skb: 4 callbacks suppressed
	[  +8.298900] kauditd_printk_skb: 27 callbacks suppressed
	[Dec 9 10:36] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.624561] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.089000] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.738809] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.865174] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.735390] kauditd_printk_skb: 12 callbacks suppressed
	[ +14.776639] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.124724] kauditd_printk_skb: 2 callbacks suppressed
	[Dec 9 10:37] kauditd_printk_skb: 8 callbacks suppressed
	[  +7.463808] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.494145] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.014184] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.217761] kauditd_printk_skb: 32 callbacks suppressed
	[ +12.317805] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.114463] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.039426] kauditd_printk_skb: 15 callbacks suppressed
	[Dec 9 10:40] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [32f3e3cc7ba9b044d6281de503c488e78f6fe147933ca3eddafd455ff57969f0] <==
	{"level":"warn","ts":"2024-12-09T10:36:21.097850Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T10:36:20.704574Z","time spent":"393.272804ms","remote":"127.0.0.1:33808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":82,"response count":0,"response size":27,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" count_only:true "}
	{"level":"warn","ts":"2024-12-09T10:36:21.097941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"342.625143ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T10:36:21.097955Z","caller":"traceutil/trace.go:171","msg":"trace[351815813] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1089; }","duration":"342.638371ms","start":"2024-12-09T10:36:20.755312Z","end":"2024-12-09T10:36:21.097951Z","steps":["trace[351815813] 'agreement among raft nodes before linearized reading'  (duration: 342.614811ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T10:36:21.097966Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T10:36:20.755280Z","time spent":"342.68316ms","remote":"127.0.0.1:33520","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-12-09T10:36:21.098208Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.779731ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T10:36:21.098258Z","caller":"traceutil/trace.go:171","msg":"trace[791460012] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1089; }","duration":"253.832343ms","start":"2024-12-09T10:36:20.844419Z","end":"2024-12-09T10:36:21.098252Z","steps":["trace[791460012] 'agreement among raft nodes before linearized reading'  (duration: 253.772593ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T10:36:21.098322Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"321.247002ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T10:36:21.098334Z","caller":"traceutil/trace.go:171","msg":"trace[545256593] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1089; }","duration":"321.259478ms","start":"2024-12-09T10:36:20.777070Z","end":"2024-12-09T10:36:21.098330Z","steps":["trace[545256593] 'agreement among raft nodes before linearized reading'  (duration: 321.241095ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T10:36:31.782191Z","caller":"traceutil/trace.go:171","msg":"trace[887302388] linearizableReadLoop","detail":"{readStateIndex:1187; appliedIndex:1186; }","duration":"199.179993ms","start":"2024-12-09T10:36:31.582997Z","end":"2024-12-09T10:36:31.782177Z","steps":["trace[887302388] 'read index received'  (duration: 198.890735ms)","trace[887302388] 'applied index is now lower than readState.Index'  (duration: 288.665µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T10:36:31.782382Z","caller":"traceutil/trace.go:171","msg":"trace[1371218546] transaction","detail":"{read_only:false; response_revision:1150; number_of_response:1; }","duration":"285.965033ms","start":"2024-12-09T10:36:31.496410Z","end":"2024-12-09T10:36:31.782375Z","steps":["trace[1371218546] 'process raft request'  (duration: 285.516505ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T10:36:31.783279Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.202998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T10:36:31.784208Z","caller":"traceutil/trace.go:171","msg":"trace[205452147] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1150; }","duration":"201.218315ms","start":"2024-12-09T10:36:31.582978Z","end":"2024-12-09T10:36:31.784197Z","steps":["trace[205452147] 'agreement among raft nodes before linearized reading'  (duration: 200.181589ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T10:37:03.501699Z","caller":"traceutil/trace.go:171","msg":"trace[983110363] linearizableReadLoop","detail":"{readStateIndex:1367; appliedIndex:1366; }","duration":"164.085201ms","start":"2024-12-09T10:37:03.337595Z","end":"2024-12-09T10:37:03.501680Z","steps":["trace[983110363] 'read index received'  (duration: 163.884895ms)","trace[983110363] 'applied index is now lower than readState.Index'  (duration: 199.3µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T10:37:03.501825Z","caller":"traceutil/trace.go:171","msg":"trace[1263777103] transaction","detail":"{read_only:false; response_revision:1321; number_of_response:1; }","duration":"334.283368ms","start":"2024-12-09T10:37:03.167532Z","end":"2024-12-09T10:37:03.501816Z","steps":["trace[1263777103] 'process raft request'  (duration: 333.992621ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T10:37:03.501922Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T10:37:03.167514Z","time spent":"334.341426ms","remote":"127.0.0.1:33586","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1314 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:450 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-12-09T10:37:03.501935Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.957133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T10:37:03.501966Z","caller":"traceutil/trace.go:171","msg":"trace[903923745] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1321; }","duration":"114.987375ms","start":"2024-12-09T10:37:03.386971Z","end":"2024-12-09T10:37:03.501958Z","steps":["trace[903923745] 'agreement among raft nodes before linearized reading'  (duration: 114.933885ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T10:37:03.502196Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.593005ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:1 size:3395"}
	{"level":"info","ts":"2024-12-09T10:37:03.502231Z","caller":"traceutil/trace.go:171","msg":"trace[919535365] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:1; response_revision:1321; }","duration":"164.627907ms","start":"2024-12-09T10:37:03.337591Z","end":"2024-12-09T10:37:03.502219Z","steps":["trace[919535365] 'agreement among raft nodes before linearized reading'  (duration: 164.464661ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T10:37:33.606958Z","caller":"traceutil/trace.go:171","msg":"trace[550684819] transaction","detail":"{read_only:false; response_revision:1534; number_of_response:1; }","duration":"195.113759ms","start":"2024-12-09T10:37:33.411825Z","end":"2024-12-09T10:37:33.606939Z","steps":["trace[550684819] 'process raft request'  (duration: 194.812417ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T10:37:39.421340Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"320.593874ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T10:37:39.421382Z","caller":"traceutil/trace.go:171","msg":"trace[858721861] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1583; }","duration":"320.650246ms","start":"2024-12-09T10:37:39.100722Z","end":"2024-12-09T10:37:39.421372Z","steps":["trace[858721861] 'range keys from in-memory index tree'  (duration: 320.547608ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T10:37:39.421415Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T10:37:39.100685Z","time spent":"320.723166ms","remote":"127.0.0.1:33520","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-12-09T10:38:04.993007Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.108475ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T10:38:04.993146Z","caller":"traceutil/trace.go:171","msg":"trace[1223110132] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1742; }","duration":"215.266284ms","start":"2024-12-09T10:38:04.777869Z","end":"2024-12-09T10:38:04.993135Z","steps":["trace[1223110132] 'range keys from in-memory index tree'  (duration: 215.097466ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:40:10 up 5 min,  0 users,  load average: 0.42, 0.92, 0.53
	Linux addons-156041 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5be4a6b2466ed9d67eab208c35cd2ca892798f62c660be84f94db3f8ab52d683] <==
	 > logger="UnhandledError"
	E1209 10:36:55.777467       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.60.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.60.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.60.41:443: connect: connection refused" logger="UnhandledError"
	E1209 10:36:55.784986       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.60.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.60.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.60.41:443: connect: connection refused" logger="UnhandledError"
	I1209 10:36:55.857786       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1209 10:36:57.513785       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.155.64"}
	I1209 10:37:33.721035       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1209 10:37:34.851757       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E1209 10:37:41.211914       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1209 10:37:45.937314       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1209 10:37:46.111496       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.141.144"}
	I1209 10:37:47.972807       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1209 10:38:04.201732       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 10:38:04.201767       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 10:38:04.221039       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 10:38:04.221193       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 10:38:04.239856       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 10:38:04.240440       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 10:38:04.273597       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 10:38:04.273651       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 10:38:04.356607       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 10:38:04.356696       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1209 10:38:05.273698       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1209 10:38:05.357815       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1209 10:38:05.362669       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1209 10:40:08.901754       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.234.181"}
	
	
	==> kube-controller-manager [7d58a34c77c2cf417f71d9147333ecb767357bdae1c925fa33ad4de8512e260b] <==
	I1209 10:38:29.305410       1 shared_informer.go:320] Caches are synced for resource quota
	I1209 10:38:29.638375       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1209 10:38:29.638428       1 shared_informer.go:320] Caches are synced for garbage collector
	W1209 10:38:44.623931       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:38:44.624009       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 10:38:46.754995       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:38:46.755185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 10:38:48.310715       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:38:48.310832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 10:39:05.971017       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:39:05.971109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 10:39:17.328647       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:39:17.328712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 10:39:21.476367       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:39:21.476473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 10:39:31.550473       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:39:31.550537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 10:39:46.185456       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:39:46.185582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 10:40:00.168294       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:40:00.168423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1209 10:40:08.705234       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="41.933474ms"
	I1209 10:40:08.717175       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.868134ms"
	I1209 10:40:08.717789       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="89.328µs"
	I1209 10:40:08.725619       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="43.442µs"
	
	
	==> kube-proxy [caaaa0e3d8bef385b8e9d924a28834f3d28156742f878b33c9f3ca3839a8061d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 10:35:02.287373       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 10:35:02.298445       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.161"]
	E1209 10:35:02.298522       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 10:35:02.396828       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 10:35:02.396865       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 10:35:02.396925       1 server_linux.go:169] "Using iptables Proxier"
	I1209 10:35:02.399843       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 10:35:02.400173       1 server.go:483] "Version info" version="v1.31.2"
	I1209 10:35:02.400184       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 10:35:02.401421       1 config.go:199] "Starting service config controller"
	I1209 10:35:02.401436       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 10:35:02.401466       1 config.go:105] "Starting endpoint slice config controller"
	I1209 10:35:02.401469       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 10:35:02.401977       1 config.go:328] "Starting node config controller"
	I1209 10:35:02.401989       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 10:35:02.501603       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 10:35:02.501668       1 shared_informer.go:320] Caches are synced for service config
	I1209 10:35:02.502275       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [69162feab66f8d9454b1e7b9084d063da9017e38a8589fb25dcebe2fda8589e9] <==
	W1209 10:34:52.190757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 10:34:52.192377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:52.190794       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 10:34:52.192477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:52.190832       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 10:34:52.195241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:52.190871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1209 10:34:52.195434       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:52.191018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1209 10:34:52.195549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:53.051931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 10:34:53.051969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:53.060851       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 10:34:53.060898       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:53.066537       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 10:34:53.068478       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1209 10:34:53.081535       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1209 10:34:53.082186       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:53.092324       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1209 10:34:53.092369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:53.171566       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1209 10:34:53.171647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:53.398292       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1209 10:34:53.398338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1209 10:34:55.084840       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 10:40:08 addons-156041 kubelet[1220]: E1209 10:40:08.694346    1220 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ae45954-73e9-4c66-a5ce-ebc93e32987b" containerName="local-path-provisioner"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: E1209 10:40:08.694813    1220 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c81f365c-4fbf-46b9-80d2-7388776c3da4" containerName="liveness-probe"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: E1209 10:40:08.694930    1220 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c81f365c-4fbf-46b9-80d2-7388776c3da4" containerName="csi-snapshotter"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: E1209 10:40:08.695020    1220 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ded01d68-2ce6-4cfe-99d0-672c5a04ce9a" containerName="volume-snapshot-controller"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: E1209 10:40:08.695054    1220 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ce59eb1-f6b0-42e3-b167-82743dead6d5" containerName="csi-attacher"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: E1209 10:40:08.695148    1220 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c81f365c-4fbf-46b9-80d2-7388776c3da4" containerName="node-driver-registrar"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: E1209 10:40:08.695185    1220 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1fb80a4-c638-430c-a5fe-1dc735303cc7" containerName="task-pv-container"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: E1209 10:40:08.695260    1220 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c81f365c-4fbf-46b9-80d2-7388776c3da4" containerName="hostpath"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: E1209 10:40:08.695292    1220 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c81f365c-4fbf-46b9-80d2-7388776c3da4" containerName="csi-provisioner"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: E1209 10:40:08.695476    1220 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c81f365c-4fbf-46b9-80d2-7388776c3da4" containerName="csi-external-health-monitor-controller"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: E1209 10:40:08.695713    1220 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d37dd49-32af-4d58-917a-73cafe8fdf4a" containerName="volume-snapshot-controller"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: E1209 10:40:08.695807    1220 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="12cbf9e5-ab92-4e05-a5bb-1aa38a653bd3" containerName="csi-resizer"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: I1209 10:40:08.696012    1220 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81f365c-4fbf-46b9-80d2-7388776c3da4" containerName="csi-provisioner"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: I1209 10:40:08.696125    1220 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae45954-73e9-4c66-a5ce-ebc93e32987b" containerName="local-path-provisioner"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: I1209 10:40:08.696158    1220 memory_manager.go:354] "RemoveStaleState removing state" podUID="12cbf9e5-ab92-4e05-a5bb-1aa38a653bd3" containerName="csi-resizer"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: I1209 10:40:08.696257    1220 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ce59eb1-f6b0-42e3-b167-82743dead6d5" containerName="csi-attacher"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: I1209 10:40:08.696289    1220 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81f365c-4fbf-46b9-80d2-7388776c3da4" containerName="csi-snapshotter"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: I1209 10:40:08.696361    1220 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1fb80a4-c638-430c-a5fe-1dc735303cc7" containerName="task-pv-container"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: I1209 10:40:08.696392    1220 memory_manager.go:354] "RemoveStaleState removing state" podUID="ded01d68-2ce6-4cfe-99d0-672c5a04ce9a" containerName="volume-snapshot-controller"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: I1209 10:40:08.696524    1220 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81f365c-4fbf-46b9-80d2-7388776c3da4" containerName="liveness-probe"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: I1209 10:40:08.696599    1220 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d37dd49-32af-4d58-917a-73cafe8fdf4a" containerName="volume-snapshot-controller"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: I1209 10:40:08.696686    1220 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81f365c-4fbf-46b9-80d2-7388776c3da4" containerName="node-driver-registrar"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: I1209 10:40:08.696718    1220 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81f365c-4fbf-46b9-80d2-7388776c3da4" containerName="hostpath"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: I1209 10:40:08.696794    1220 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81f365c-4fbf-46b9-80d2-7388776c3da4" containerName="csi-external-health-monitor-controller"
	Dec 09 10:40:08 addons-156041 kubelet[1220]: I1209 10:40:08.827030    1220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b7rg\" (UniqueName: \"kubernetes.io/projected/a6cde186-7633-4323-9b33-f3737f01184c-kube-api-access-5b7rg\") pod \"hello-world-app-55bf9c44b4-nfmnx\" (UID: \"a6cde186-7633-4323-9b33-f3737f01184c\") " pod="default/hello-world-app-55bf9c44b4-nfmnx"
	
	
	==> storage-provisioner [e75072ac379d65e9afbaf83daf64c569651d9cdc52aca2478b15b50422e9bb9a] <==
	I1209 10:35:07.796260       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 10:35:07.836192       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 10:35:07.836272       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 10:35:07.927783       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 10:35:07.927954       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-156041_ebf2427a-c807-47ef-a9e5-1cb1fc71f37a!
	I1209 10:35:07.934947       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b5aa873f-06fb-48fe-8c0c-8c1d664836ee", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-156041_ebf2427a-c807-47ef-a9e5-1cb1fc71f37a became leader
	I1209 10:35:08.044411       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-156041_ebf2427a-c807-47ef-a9e5-1cb1fc71f37a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-156041 -n addons-156041
helpers_test.go:261: (dbg) Run:  kubectl --context addons-156041 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-nfmnx ingress-nginx-admission-create-ss886 ingress-nginx-admission-patch-8vp2g
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-156041 describe pod hello-world-app-55bf9c44b4-nfmnx ingress-nginx-admission-create-ss886 ingress-nginx-admission-patch-8vp2g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-156041 describe pod hello-world-app-55bf9c44b4-nfmnx ingress-nginx-admission-create-ss886 ingress-nginx-admission-patch-8vp2g: exit status 1 (72.19204ms)

-- stdout --
	Name:             hello-world-app-55bf9c44b4-nfmnx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-156041/192.168.39.161
	Start Time:       Mon, 09 Dec 2024 10:40:08 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5b7rg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5b7rg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-nfmnx to addons-156041
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ss886" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8vp2g" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-156041 describe pod hello-world-app-55bf9c44b4-nfmnx ingress-nginx-admission-create-ss886 ingress-nginx-admission-patch-8vp2g: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-156041 addons disable ingress-dns --alsologtostderr -v=1: (1.74579142s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-156041 addons disable ingress --alsologtostderr -v=1: (7.714748679s)
--- FAIL: TestAddons/parallel/Ingress (155.16s)

TestAddons/parallel/MetricsServer (327.51s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 5.091185ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-s7gmn" [a2e3bba5-5ed2-4131-a072-a3597c3d28b1] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003829624s
addons_test.go:402: (dbg) Run:  kubectl --context addons-156041 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-156041 top pods -n kube-system: exit status 1 (92.081761ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hbkzd, age: 2m0.845176271s

** /stderr **
I1209 10:37:02.847849  617017 retry.go:31] will retry after 1.826642982s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-156041 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-156041 top pods -n kube-system: exit status 1 (79.118912ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hbkzd, age: 2m2.752168433s

** /stderr **
I1209 10:37:04.754649  617017 retry.go:31] will retry after 4.740600165s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-156041 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-156041 top pods -n kube-system: exit status 1 (67.879206ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hbkzd, age: 2m7.561294066s

** /stderr **
I1209 10:37:09.563802  617017 retry.go:31] will retry after 8.274739366s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-156041 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-156041 top pods -n kube-system: exit status 1 (84.927032ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hbkzd, age: 2m15.920733615s

** /stderr **
I1209 10:37:17.924571  617017 retry.go:31] will retry after 10.472806689s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-156041 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-156041 top pods -n kube-system: exit status 1 (71.514188ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hbkzd, age: 2m26.466750065s

** /stderr **
I1209 10:37:28.469642  617017 retry.go:31] will retry after 16.800632304s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-156041 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-156041 top pods -n kube-system: exit status 1 (69.332743ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hbkzd, age: 2m43.338376842s

** /stderr **
I1209 10:37:45.340765  617017 retry.go:31] will retry after 31.481123976s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-156041 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-156041 top pods -n kube-system: exit status 1 (64.42295ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hbkzd, age: 3m14.884927503s

** /stderr **
I1209 10:38:16.887835  617017 retry.go:31] will retry after 50.646776117s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-156041 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-156041 top pods -n kube-system: exit status 1 (65.876078ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hbkzd, age: 4m5.601501225s

** /stderr **
I1209 10:39:07.604120  617017 retry.go:31] will retry after 36.789665617s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-156041 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-156041 top pods -n kube-system: exit status 1 (66.595792ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hbkzd, age: 4m42.458385844s

** /stderr **
I1209 10:39:44.461373  617017 retry.go:31] will retry after 44.641561444s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-156041 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-156041 top pods -n kube-system: exit status 1 (67.587473ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hbkzd, age: 5m27.171303148s

** /stderr **
I1209 10:40:29.174240  617017 retry.go:31] will retry after 35.546645581s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-156041 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-156041 top pods -n kube-system: exit status 1 (66.416834ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hbkzd, age: 6m2.785064137s

** /stderr **
I1209 10:41:04.787666  617017 retry.go:31] will retry after 38.012418366s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-156041 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-156041 top pods -n kube-system: exit status 1 (65.563028ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hbkzd, age: 6m40.863472436s

** /stderr **
I1209 10:41:42.866356  617017 retry.go:31] will retry after 38.757663181s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-156041 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-156041 top pods -n kube-system: exit status 1 (68.173513ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-hbkzd, age: 7m19.690873472s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-156041 -n addons-156041
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-156041 logs -n 25: (1.149227401s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-942086                                                                     | download-only-942086 | jenkins | v1.34.0 | 09 Dec 24 10:34 UTC | 09 Dec 24 10:34 UTC |
	| delete  | -p download-only-596508                                                                     | download-only-596508 | jenkins | v1.34.0 | 09 Dec 24 10:34 UTC | 09 Dec 24 10:34 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-654291 | jenkins | v1.34.0 | 09 Dec 24 10:34 UTC |                     |
	|         | binary-mirror-654291                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45797                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-654291                                                                     | binary-mirror-654291 | jenkins | v1.34.0 | 09 Dec 24 10:34 UTC | 09 Dec 24 10:34 UTC |
	| addons  | enable dashboard -p                                                                         | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:34 UTC |                     |
	|         | addons-156041                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:34 UTC |                     |
	|         | addons-156041                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-156041 --wait=true                                                                | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:34 UTC | 09 Dec 24 10:36 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-156041 addons disable                                                                | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:36 UTC | 09 Dec 24 10:36 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-156041 addons disable                                                                | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:36 UTC | 09 Dec 24 10:36 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:36 UTC | 09 Dec 24 10:36 UTC |
	|         | -p addons-156041                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-156041 addons                                                                        | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:37 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-156041 addons disable                                                                | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:37 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-156041 ip                                                                            | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:37 UTC |
	| addons  | addons-156041 addons disable                                                                | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:37 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-156041 addons disable                                                                | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:37 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-156041 ssh cat                                                                       | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:37 UTC |
	|         | /opt/local-path-provisioner/pvc-24d2631a-658d-4b19-9ca8-01e524add183_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-156041 addons disable                                                                | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:38 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-156041 addons                                                                        | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:37 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-156041 addons                                                                        | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC | 09 Dec 24 10:37 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-156041 ssh curl -s                                                                   | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:37 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-156041 addons                                                                        | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:38 UTC | 09 Dec 24 10:38 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-156041 addons                                                                        | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:38 UTC | 09 Dec 24 10:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-156041 ip                                                                            | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:40 UTC | 09 Dec 24 10:40 UTC |
	| addons  | addons-156041 addons disable                                                                | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:40 UTC | 09 Dec 24 10:40 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-156041 addons disable                                                                | addons-156041        | jenkins | v1.34.0 | 09 Dec 24 10:40 UTC | 09 Dec 24 10:40 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 10:34:10
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 10:34:10.334032  617708 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:34:10.334143  617708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:34:10.334154  617708 out.go:358] Setting ErrFile to fd 2...
	I1209 10:34:10.334158  617708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:34:10.334365  617708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 10:34:10.335026  617708 out.go:352] Setting JSON to false
	I1209 10:34:10.335966  617708 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11794,"bootTime":1733728656,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 10:34:10.336071  617708 start.go:139] virtualization: kvm guest
	I1209 10:34:10.338055  617708 out.go:177] * [addons-156041] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 10:34:10.339160  617708 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 10:34:10.339161  617708 notify.go:220] Checking for updates...
	I1209 10:34:10.341433  617708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 10:34:10.342561  617708 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:34:10.343693  617708 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:34:10.344680  617708 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 10:34:10.345673  617708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 10:34:10.346802  617708 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 10:34:10.379431  617708 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 10:34:10.380633  617708 start.go:297] selected driver: kvm2
	I1209 10:34:10.380648  617708 start.go:901] validating driver "kvm2" against <nil>
	I1209 10:34:10.380663  617708 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 10:34:10.381417  617708 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:34:10.381512  617708 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 10:34:10.397095  617708 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 10:34:10.397149  617708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 10:34:10.397440  617708 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 10:34:10.397484  617708 cni.go:84] Creating CNI manager for ""
	I1209 10:34:10.397538  617708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 10:34:10.397548  617708 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 10:34:10.397624  617708 start.go:340] cluster config:
	{Name:addons-156041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-156041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:34:10.397744  617708 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:34:10.400455  617708 out.go:177] * Starting "addons-156041" primary control-plane node in "addons-156041" cluster
	I1209 10:34:10.401804  617708 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:34:10.401854  617708 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 10:34:10.401872  617708 cache.go:56] Caching tarball of preloaded images
	I1209 10:34:10.401950  617708 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 10:34:10.401962  617708 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 10:34:10.402282  617708 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/config.json ...
	I1209 10:34:10.402311  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/config.json: {Name:mkf770aad6ba2027e147531a9983e08c583227ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:10.402467  617708 start.go:360] acquireMachinesLock for addons-156041: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 10:34:10.402521  617708 start.go:364] duration metric: took 39.895µs to acquireMachinesLock for "addons-156041"
	I1209 10:34:10.402538  617708 start.go:93] Provisioning new machine with config: &{Name:addons-156041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-156041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:34:10.402598  617708 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 10:34:10.404184  617708 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1209 10:34:10.404378  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:34:10.404427  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:34:10.419136  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36221
	I1209 10:34:10.419623  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:34:10.420266  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:34:10.420288  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:34:10.420644  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:34:10.420843  617708 main.go:141] libmachine: (addons-156041) Calling .GetMachineName
	I1209 10:34:10.421007  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:34:10.421157  617708 start.go:159] libmachine.API.Create for "addons-156041" (driver="kvm2")
	I1209 10:34:10.421186  617708 client.go:168] LocalClient.Create starting
	I1209 10:34:10.421234  617708 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 10:34:10.557806  617708 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 10:34:10.898742  617708 main.go:141] libmachine: Running pre-create checks...
	I1209 10:34:10.898772  617708 main.go:141] libmachine: (addons-156041) Calling .PreCreateCheck
	I1209 10:34:10.899263  617708 main.go:141] libmachine: (addons-156041) Calling .GetConfigRaw
	I1209 10:34:10.899724  617708 main.go:141] libmachine: Creating machine...
	I1209 10:34:10.899738  617708 main.go:141] libmachine: (addons-156041) Calling .Create
	I1209 10:34:10.899907  617708 main.go:141] libmachine: (addons-156041) Creating KVM machine...
	I1209 10:34:10.901204  617708 main.go:141] libmachine: (addons-156041) DBG | found existing default KVM network
	I1209 10:34:10.902054  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:10.901908  617731 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123350}
	I1209 10:34:10.902082  617708 main.go:141] libmachine: (addons-156041) DBG | created network xml: 
	I1209 10:34:10.902096  617708 main.go:141] libmachine: (addons-156041) DBG | <network>
	I1209 10:34:10.902125  617708 main.go:141] libmachine: (addons-156041) DBG |   <name>mk-addons-156041</name>
	I1209 10:34:10.902156  617708 main.go:141] libmachine: (addons-156041) DBG |   <dns enable='no'/>
	I1209 10:34:10.902165  617708 main.go:141] libmachine: (addons-156041) DBG |   
	I1209 10:34:10.902194  617708 main.go:141] libmachine: (addons-156041) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1209 10:34:10.902217  617708 main.go:141] libmachine: (addons-156041) DBG |     <dhcp>
	I1209 10:34:10.902228  617708 main.go:141] libmachine: (addons-156041) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1209 10:34:10.902240  617708 main.go:141] libmachine: (addons-156041) DBG |     </dhcp>
	I1209 10:34:10.902249  617708 main.go:141] libmachine: (addons-156041) DBG |   </ip>
	I1209 10:34:10.902255  617708 main.go:141] libmachine: (addons-156041) DBG |   
	I1209 10:34:10.902265  617708 main.go:141] libmachine: (addons-156041) DBG | </network>
	I1209 10:34:10.902271  617708 main.go:141] libmachine: (addons-156041) DBG | 
	I1209 10:34:10.907299  617708 main.go:141] libmachine: (addons-156041) DBG | trying to create private KVM network mk-addons-156041 192.168.39.0/24...
	I1209 10:34:10.974694  617708 main.go:141] libmachine: (addons-156041) DBG | private KVM network mk-addons-156041 192.168.39.0/24 created
	I1209 10:34:10.974726  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:10.974660  617731 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:34:10.974750  617708 main.go:141] libmachine: (addons-156041) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041 ...
	I1209 10:34:10.974773  617708 main.go:141] libmachine: (addons-156041) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 10:34:10.974794  617708 main.go:141] libmachine: (addons-156041) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 10:34:11.267720  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:11.267524  617731 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa...
	I1209 10:34:11.459007  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:11.458838  617731 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/addons-156041.rawdisk...
	I1209 10:34:11.459048  617708 main.go:141] libmachine: (addons-156041) DBG | Writing magic tar header
	I1209 10:34:11.459065  617708 main.go:141] libmachine: (addons-156041) DBG | Writing SSH key tar header
	I1209 10:34:11.459075  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:11.458964  617731 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041 ...
	I1209 10:34:11.459088  617708 main.go:141] libmachine: (addons-156041) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041
	I1209 10:34:11.459099  617708 main.go:141] libmachine: (addons-156041) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041 (perms=drwx------)
	I1209 10:34:11.459109  617708 main.go:141] libmachine: (addons-156041) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 10:34:11.459120  617708 main.go:141] libmachine: (addons-156041) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 10:34:11.459136  617708 main.go:141] libmachine: (addons-156041) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 10:34:11.459146  617708 main.go:141] libmachine: (addons-156041) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 10:34:11.459157  617708 main.go:141] libmachine: (addons-156041) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 10:34:11.459165  617708 main.go:141] libmachine: (addons-156041) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 10:34:11.459171  617708 main.go:141] libmachine: (addons-156041) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:34:11.459191  617708 main.go:141] libmachine: (addons-156041) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 10:34:11.459206  617708 main.go:141] libmachine: (addons-156041) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 10:34:11.459213  617708 main.go:141] libmachine: (addons-156041) Creating domain...
	I1209 10:34:11.459227  617708 main.go:141] libmachine: (addons-156041) DBG | Checking permissions on dir: /home/jenkins
	I1209 10:34:11.459234  617708 main.go:141] libmachine: (addons-156041) DBG | Checking permissions on dir: /home
	I1209 10:34:11.459244  617708 main.go:141] libmachine: (addons-156041) DBG | Skipping /home - not owner
	I1209 10:34:11.460267  617708 main.go:141] libmachine: (addons-156041) define libvirt domain using xml: 
	I1209 10:34:11.460319  617708 main.go:141] libmachine: (addons-156041) <domain type='kvm'>
	I1209 10:34:11.460332  617708 main.go:141] libmachine: (addons-156041)   <name>addons-156041</name>
	I1209 10:34:11.460344  617708 main.go:141] libmachine: (addons-156041)   <memory unit='MiB'>4000</memory>
	I1209 10:34:11.460355  617708 main.go:141] libmachine: (addons-156041)   <vcpu>2</vcpu>
	I1209 10:34:11.460367  617708 main.go:141] libmachine: (addons-156041)   <features>
	I1209 10:34:11.460380  617708 main.go:141] libmachine: (addons-156041)     <acpi/>
	I1209 10:34:11.460391  617708 main.go:141] libmachine: (addons-156041)     <apic/>
	I1209 10:34:11.460403  617708 main.go:141] libmachine: (addons-156041)     <pae/>
	I1209 10:34:11.460413  617708 main.go:141] libmachine: (addons-156041)     
	I1209 10:34:11.460459  617708 main.go:141] libmachine: (addons-156041)   </features>
	I1209 10:34:11.460483  617708 main.go:141] libmachine: (addons-156041)   <cpu mode='host-passthrough'>
	I1209 10:34:11.460490  617708 main.go:141] libmachine: (addons-156041)   
	I1209 10:34:11.460499  617708 main.go:141] libmachine: (addons-156041)   </cpu>
	I1209 10:34:11.460508  617708 main.go:141] libmachine: (addons-156041)   <os>
	I1209 10:34:11.460515  617708 main.go:141] libmachine: (addons-156041)     <type>hvm</type>
	I1209 10:34:11.460527  617708 main.go:141] libmachine: (addons-156041)     <boot dev='cdrom'/>
	I1209 10:34:11.460537  617708 main.go:141] libmachine: (addons-156041)     <boot dev='hd'/>
	I1209 10:34:11.460549  617708 main.go:141] libmachine: (addons-156041)     <bootmenu enable='no'/>
	I1209 10:34:11.460557  617708 main.go:141] libmachine: (addons-156041)   </os>
	I1209 10:34:11.460562  617708 main.go:141] libmachine: (addons-156041)   <devices>
	I1209 10:34:11.460568  617708 main.go:141] libmachine: (addons-156041)     <disk type='file' device='cdrom'>
	I1209 10:34:11.460606  617708 main.go:141] libmachine: (addons-156041)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/boot2docker.iso'/>
	I1209 10:34:11.460636  617708 main.go:141] libmachine: (addons-156041)       <target dev='hdc' bus='scsi'/>
	I1209 10:34:11.460648  617708 main.go:141] libmachine: (addons-156041)       <readonly/>
	I1209 10:34:11.460660  617708 main.go:141] libmachine: (addons-156041)     </disk>
	I1209 10:34:11.460690  617708 main.go:141] libmachine: (addons-156041)     <disk type='file' device='disk'>
	I1209 10:34:11.460710  617708 main.go:141] libmachine: (addons-156041)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 10:34:11.460733  617708 main.go:141] libmachine: (addons-156041)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/addons-156041.rawdisk'/>
	I1209 10:34:11.460764  617708 main.go:141] libmachine: (addons-156041)       <target dev='hda' bus='virtio'/>
	I1209 10:34:11.460781  617708 main.go:141] libmachine: (addons-156041)     </disk>
	I1209 10:34:11.460792  617708 main.go:141] libmachine: (addons-156041)     <interface type='network'>
	I1209 10:34:11.460803  617708 main.go:141] libmachine: (addons-156041)       <source network='mk-addons-156041'/>
	I1209 10:34:11.460811  617708 main.go:141] libmachine: (addons-156041)       <model type='virtio'/>
	I1209 10:34:11.460822  617708 main.go:141] libmachine: (addons-156041)     </interface>
	I1209 10:34:11.460829  617708 main.go:141] libmachine: (addons-156041)     <interface type='network'>
	I1209 10:34:11.460841  617708 main.go:141] libmachine: (addons-156041)       <source network='default'/>
	I1209 10:34:11.460850  617708 main.go:141] libmachine: (addons-156041)       <model type='virtio'/>
	I1209 10:34:11.460856  617708 main.go:141] libmachine: (addons-156041)     </interface>
	I1209 10:34:11.460865  617708 main.go:141] libmachine: (addons-156041)     <serial type='pty'>
	I1209 10:34:11.460874  617708 main.go:141] libmachine: (addons-156041)       <target port='0'/>
	I1209 10:34:11.460888  617708 main.go:141] libmachine: (addons-156041)     </serial>
	I1209 10:34:11.460901  617708 main.go:141] libmachine: (addons-156041)     <console type='pty'>
	I1209 10:34:11.460912  617708 main.go:141] libmachine: (addons-156041)       <target type='serial' port='0'/>
	I1209 10:34:11.460919  617708 main.go:141] libmachine: (addons-156041)     </console>
	I1209 10:34:11.460929  617708 main.go:141] libmachine: (addons-156041)     <rng model='virtio'>
	I1209 10:34:11.460938  617708 main.go:141] libmachine: (addons-156041)       <backend model='random'>/dev/random</backend>
	I1209 10:34:11.460945  617708 main.go:141] libmachine: (addons-156041)     </rng>
	I1209 10:34:11.460952  617708 main.go:141] libmachine: (addons-156041)     
	I1209 10:34:11.460961  617708 main.go:141] libmachine: (addons-156041)     
	I1209 10:34:11.460977  617708 main.go:141] libmachine: (addons-156041)   </devices>
	I1209 10:34:11.460989  617708 main.go:141] libmachine: (addons-156041) </domain>
	I1209 10:34:11.461001  617708 main.go:141] libmachine: (addons-156041) 
	I1209 10:34:11.466548  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:38:e5:cd in network default
	I1209 10:34:11.467097  617708 main.go:141] libmachine: (addons-156041) Ensuring networks are active...
	I1209 10:34:11.467121  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:11.467767  617708 main.go:141] libmachine: (addons-156041) Ensuring network default is active
	I1209 10:34:11.468085  617708 main.go:141] libmachine: (addons-156041) Ensuring network mk-addons-156041 is active
	I1209 10:34:11.468556  617708 main.go:141] libmachine: (addons-156041) Getting domain xml...
	I1209 10:34:11.469226  617708 main.go:141] libmachine: (addons-156041) Creating domain...
	I1209 10:34:12.877259  617708 main.go:141] libmachine: (addons-156041) Waiting to get IP...
	I1209 10:34:12.878071  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:12.878623  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:12.878652  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:12.878601  617731 retry.go:31] will retry after 211.633142ms: waiting for machine to come up
	I1209 10:34:13.092362  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:13.092875  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:13.092901  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:13.092821  617731 retry.go:31] will retry after 334.859148ms: waiting for machine to come up
	I1209 10:34:13.429491  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:13.429829  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:13.429867  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:13.429789  617731 retry.go:31] will retry after 306.448763ms: waiting for machine to come up
	I1209 10:34:13.738661  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:13.739111  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:13.739146  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:13.739068  617731 retry.go:31] will retry after 386.245722ms: waiting for machine to come up
	I1209 10:34:14.126628  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:14.126985  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:14.127010  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:14.126955  617731 retry.go:31] will retry after 694.024962ms: waiting for machine to come up
	I1209 10:34:14.823112  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:14.823577  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:14.823601  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:14.823533  617731 retry.go:31] will retry after 589.517993ms: waiting for machine to come up
	I1209 10:34:15.414323  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:15.414706  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:15.414736  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:15.414644  617731 retry.go:31] will retry after 1.171119297s: waiting for machine to come up
	I1209 10:34:16.587399  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:16.587898  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:16.587919  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:16.587876  617731 retry.go:31] will retry after 964.036276ms: waiting for machine to come up
	I1209 10:34:17.554151  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:17.554514  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:17.554546  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:17.554478  617731 retry.go:31] will retry after 1.154329367s: waiting for machine to come up
	I1209 10:34:18.710995  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:18.711398  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:18.711421  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:18.711353  617731 retry.go:31] will retry after 1.40055916s: waiting for machine to come up
	I1209 10:34:20.113871  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:20.114249  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:20.114281  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:20.114155  617731 retry.go:31] will retry after 2.504420228s: waiting for machine to come up
	I1209 10:34:22.620064  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:22.620525  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:22.620552  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:22.620489  617731 retry.go:31] will retry after 3.130098112s: waiting for machine to come up
	I1209 10:34:25.752259  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:25.752694  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:25.752717  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:25.752635  617731 retry.go:31] will retry after 4.102691958s: waiting for machine to come up
	I1209 10:34:29.860162  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:29.860625  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find current IP address of domain addons-156041 in network mk-addons-156041
	I1209 10:34:29.860661  617708 main.go:141] libmachine: (addons-156041) DBG | I1209 10:34:29.860596  617731 retry.go:31] will retry after 3.589941106s: waiting for machine to come up
	I1209 10:34:33.454289  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:33.454832  617708 main.go:141] libmachine: (addons-156041) Found IP for machine: 192.168.39.161
	I1209 10:34:33.454860  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has current primary IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:33.454874  617708 main.go:141] libmachine: (addons-156041) Reserving static IP address...
	I1209 10:34:33.455112  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find host DHCP lease matching {name: "addons-156041", mac: "52:54:00:fc:f1:8a", ip: "192.168.39.161"} in network mk-addons-156041
	I1209 10:34:33.529495  617708 main.go:141] libmachine: (addons-156041) Reserved static IP address: 192.168.39.161
	I1209 10:34:33.529536  617708 main.go:141] libmachine: (addons-156041) DBG | Getting to WaitForSSH function...
	I1209 10:34:33.529545  617708 main.go:141] libmachine: (addons-156041) Waiting for SSH to be available...
	I1209 10:34:33.531927  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:33.532251  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041
	I1209 10:34:33.532283  617708 main.go:141] libmachine: (addons-156041) DBG | unable to find defined IP address of network mk-addons-156041 interface with MAC address 52:54:00:fc:f1:8a
	I1209 10:34:33.532436  617708 main.go:141] libmachine: (addons-156041) DBG | Using SSH client type: external
	I1209 10:34:33.532464  617708 main.go:141] libmachine: (addons-156041) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa (-rw-------)
	I1209 10:34:33.532502  617708 main.go:141] libmachine: (addons-156041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 10:34:33.532530  617708 main.go:141] libmachine: (addons-156041) DBG | About to run SSH command:
	I1209 10:34:33.532544  617708 main.go:141] libmachine: (addons-156041) DBG | exit 0
	I1209 10:34:33.543751  617708 main.go:141] libmachine: (addons-156041) DBG | SSH cmd err, output: exit status 255: 
	I1209 10:34:33.543776  617708 main.go:141] libmachine: (addons-156041) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1209 10:34:33.543783  617708 main.go:141] libmachine: (addons-156041) DBG | command : exit 0
	I1209 10:34:33.543788  617708 main.go:141] libmachine: (addons-156041) DBG | err     : exit status 255
	I1209 10:34:33.543795  617708 main.go:141] libmachine: (addons-156041) DBG | output  : 
	I1209 10:34:36.544520  617708 main.go:141] libmachine: (addons-156041) DBG | Getting to WaitForSSH function...
	I1209 10:34:36.546860  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:36.547188  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:36.547210  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:36.547398  617708 main.go:141] libmachine: (addons-156041) DBG | Using SSH client type: external
	I1209 10:34:36.547432  617708 main.go:141] libmachine: (addons-156041) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa (-rw-------)
	I1209 10:34:36.547474  617708 main.go:141] libmachine: (addons-156041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 10:34:36.547494  617708 main.go:141] libmachine: (addons-156041) DBG | About to run SSH command:
	I1209 10:34:36.547511  617708 main.go:141] libmachine: (addons-156041) DBG | exit 0
	I1209 10:34:36.674295  617708 main.go:141] libmachine: (addons-156041) DBG | SSH cmd err, output: <nil>: 
	I1209 10:34:36.674618  617708 main.go:141] libmachine: (addons-156041) KVM machine creation complete!
	I1209 10:34:36.674998  617708 main.go:141] libmachine: (addons-156041) Calling .GetConfigRaw
	I1209 10:34:36.675624  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:34:36.675849  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:34:36.676031  617708 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 10:34:36.676047  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:34:36.677323  617708 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 10:34:36.677339  617708 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 10:34:36.677344  617708 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 10:34:36.677350  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:36.679630  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:36.679988  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:36.680010  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:36.680141  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:36.680359  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:36.680583  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:36.680757  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:36.680935  617708 main.go:141] libmachine: Using SSH client type: native
	I1209 10:34:36.681213  617708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I1209 10:34:36.681230  617708 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 10:34:36.789341  617708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:34:36.789375  617708 main.go:141] libmachine: Detecting the provisioner...
	I1209 10:34:36.789384  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:36.792214  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:36.792552  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:36.792575  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:36.792755  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:36.792944  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:36.793098  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:36.793288  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:36.793503  617708 main.go:141] libmachine: Using SSH client type: native
	I1209 10:34:36.793721  617708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I1209 10:34:36.793735  617708 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 10:34:36.902802  617708 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 10:34:36.902912  617708 main.go:141] libmachine: found compatible host: buildroot
	I1209 10:34:36.902924  617708 main.go:141] libmachine: Provisioning with buildroot...
	I1209 10:34:36.902935  617708 main.go:141] libmachine: (addons-156041) Calling .GetMachineName
	I1209 10:34:36.903199  617708 buildroot.go:166] provisioning hostname "addons-156041"
	I1209 10:34:36.903231  617708 main.go:141] libmachine: (addons-156041) Calling .GetMachineName
	I1209 10:34:36.903468  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:36.906098  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:36.906435  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:36.906458  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:36.906544  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:36.906773  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:36.906933  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:36.907168  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:36.907352  617708 main.go:141] libmachine: Using SSH client type: native
	I1209 10:34:36.907534  617708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I1209 10:34:36.907548  617708 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-156041 && echo "addons-156041" | sudo tee /etc/hostname
	I1209 10:34:37.029308  617708 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-156041
	
	I1209 10:34:37.029333  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:37.031961  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.032309  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.032340  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.032529  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:37.032696  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:37.032834  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:37.032987  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:37.033191  617708 main.go:141] libmachine: Using SSH client type: native
	I1209 10:34:37.033362  617708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I1209 10:34:37.033378  617708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-156041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-156041/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-156041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 10:34:37.145967  617708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
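The shell snippet above only touches /etc/hosts when the new hostname is not already present: it rewrites an existing 127.0.1.1 line if there is one, otherwise appends a new entry. A rough Go equivalent of that idempotent edit, operating on the hosts-file contents as a string (minikube itself does this via the shell shown, so this is purely illustrative):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell logic: if no line already ends with the
// hostname, either rewrite an existing 127.0.1.1 line or append a new one.
func ensureHostname(hosts, name string) string {
	hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`)
	if hasName.MatchString(hosts) {
		return hosts // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	before := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostname(before, "addons-156041"))
}
```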
	I1209 10:34:37.146028  617708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 10:34:37.146100  617708 buildroot.go:174] setting up certificates
	I1209 10:34:37.146126  617708 provision.go:84] configureAuth start
	I1209 10:34:37.146151  617708 main.go:141] libmachine: (addons-156041) Calling .GetMachineName
	I1209 10:34:37.146501  617708 main.go:141] libmachine: (addons-156041) Calling .GetIP
	I1209 10:34:37.149063  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.149389  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.149417  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.149536  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:37.151668  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.151919  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.151951  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.152048  617708 provision.go:143] copyHostCerts
	I1209 10:34:37.152128  617708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 10:34:37.152268  617708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 10:34:37.152332  617708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 10:34:37.152381  617708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.addons-156041 san=[127.0.0.1 192.168.39.161 addons-156041 localhost minikube]
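provision.go then generates a server certificate signed by the local minikube CA with the SAN list shown (127.0.0.1, 192.168.39.161, addons-156041, localhost, minikube). A self-contained sketch of the same idea with crypto/x509, using in-memory ECDSA keys rather than the key types and file paths minikube actually uses:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Self-signed CA (stands in for .minikube/certs/ca.pem).
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate carrying the SANs from the log line above.
	srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-156041"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-156041", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.161")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)

	// Emit the server cert as PEM (the log later copies it to /etc/docker/server.pem).
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
```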
	I1209 10:34:37.423254  617708 provision.go:177] copyRemoteCerts
	I1209 10:34:37.423323  617708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 10:34:37.423352  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:37.426066  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.426444  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.426477  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.426608  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:37.426825  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:37.426964  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:37.427113  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
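Every remote command in this log goes through an SSH session authenticated with the machine's id_rsa key, as the sshutil line above shows. A bare-bones sketch of that pattern with golang.org/x/crypto/ssh; the key path and address are placeholders, and host-key verification is skipped for brevity, so treat it as illustrative rather than minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder path; the log uses the per-machine id_rsa under .minikube/machines.
	keyPEM, err := os.ReadFile("/path/to/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
	}
	client, err := ssh.Dial("tcp", "192.168.39.161:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// One session per command, same as running "cat /etc/os-release" in the log.
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("cat /etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```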
	I1209 10:34:37.512546  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 10:34:37.534821  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 10:34:37.556406  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 10:34:37.578220  617708 provision.go:87] duration metric: took 432.072178ms to configureAuth
	I1209 10:34:37.578261  617708 buildroot.go:189] setting minikube options for container-runtime
	I1209 10:34:37.578498  617708 config.go:182] Loaded profile config "addons-156041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:34:37.578590  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:37.580991  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.581399  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.581430  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.581648  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:37.581893  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:37.582085  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:37.582292  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:37.582483  617708 main.go:141] libmachine: Using SSH client type: native
	I1209 10:34:37.582645  617708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I1209 10:34:37.582658  617708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 10:34:37.802965  617708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 10:34:37.802991  617708 main.go:141] libmachine: Checking connection to Docker...
	I1209 10:34:37.803001  617708 main.go:141] libmachine: (addons-156041) Calling .GetURL
	I1209 10:34:37.804293  617708 main.go:141] libmachine: (addons-156041) DBG | Using libvirt version 6000000
	I1209 10:34:37.806358  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.806814  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.806841  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.807024  617708 main.go:141] libmachine: Docker is up and running!
	I1209 10:34:37.807039  617708 main.go:141] libmachine: Reticulating splines...
	I1209 10:34:37.807049  617708 client.go:171] duration metric: took 27.385849388s to LocalClient.Create
	I1209 10:34:37.807093  617708 start.go:167] duration metric: took 27.385936007s to libmachine.API.Create "addons-156041"
	I1209 10:34:37.807118  617708 start.go:293] postStartSetup for "addons-156041" (driver="kvm2")
	I1209 10:34:37.807135  617708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 10:34:37.807161  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:34:37.807425  617708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 10:34:37.807450  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:37.809753  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.810084  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.810110  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.810191  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:37.810395  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:37.810542  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:37.810685  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:34:37.892685  617708 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 10:34:37.896835  617708 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 10:34:37.896866  617708 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 10:34:37.896940  617708 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 10:34:37.896966  617708 start.go:296] duration metric: took 89.837446ms for postStartSetup
	I1209 10:34:37.897022  617708 main.go:141] libmachine: (addons-156041) Calling .GetConfigRaw
	I1209 10:34:37.897693  617708 main.go:141] libmachine: (addons-156041) Calling .GetIP
	I1209 10:34:37.900481  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.900800  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.900827  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.901069  617708 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/config.json ...
	I1209 10:34:37.901268  617708 start.go:128] duration metric: took 27.498657742s to createHost
	I1209 10:34:37.901308  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:37.903609  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.903905  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:37.903929  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:37.904025  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:37.904242  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:37.904364  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:37.904520  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:37.904633  617708 main.go:141] libmachine: Using SSH client type: native
	I1209 10:34:37.904792  617708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I1209 10:34:37.904809  617708 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 10:34:38.014838  617708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733740477.994546390
	
	I1209 10:34:38.014871  617708 fix.go:216] guest clock: 1733740477.994546390
	I1209 10:34:38.014884  617708 fix.go:229] Guest: 2024-12-09 10:34:37.99454639 +0000 UTC Remote: 2024-12-09 10:34:37.901281977 +0000 UTC m=+27.606637014 (delta=93.264413ms)
	I1209 10:34:38.014943  617708 fix.go:200] guest clock delta is within tolerance: 93.264413ms
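fix.go compares the guest's `date +%s.%N` output against the host clock and only resynchronizes when the delta exceeds a tolerance. A small sketch of parsing that output and computing the delta; the tolerance constant below is invented for illustration, not minikube's actual threshold:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns "seconds.nanoseconds" (the output of `date +%s.%N`) into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(s), ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if nsecStr != "" {
		if nsec, err = strconv.ParseInt(nsecStr, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1733740477.994546390") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold only
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
```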
	I1209 10:34:38.014950  617708 start.go:83] releasing machines lock for "addons-156041", held for 27.612418671s
	I1209 10:34:38.014981  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:34:38.015314  617708 main.go:141] libmachine: (addons-156041) Calling .GetIP
	I1209 10:34:38.017805  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:38.018144  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:38.018193  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:38.018360  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:34:38.018854  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:34:38.019051  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:34:38.019154  617708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 10:34:38.019219  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:38.019338  617708 ssh_runner.go:195] Run: cat /version.json
	I1209 10:34:38.019372  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:34:38.022404  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:38.022572  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:38.022747  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:38.022770  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:38.022942  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:38.022974  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:38.022980  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:38.023174  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:38.023255  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:34:38.023353  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:38.023412  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:34:38.023628  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:34:38.023628  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:34:38.023772  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:34:38.136226  617708 ssh_runner.go:195] Run: systemctl --version
	I1209 10:34:38.142360  617708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 10:34:38.302408  617708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 10:34:38.308127  617708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 10:34:38.308208  617708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 10:34:38.323253  617708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 10:34:38.323283  617708 start.go:495] detecting cgroup driver to use...
	I1209 10:34:38.323366  617708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 10:34:38.338933  617708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 10:34:38.351830  617708 docker.go:217] disabling cri-docker service (if available) ...
	I1209 10:34:38.351888  617708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 10:34:38.364462  617708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 10:34:38.377338  617708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 10:34:38.485900  617708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 10:34:38.645698  617708 docker.go:233] disabling docker service ...
	I1209 10:34:38.645791  617708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 10:34:38.659352  617708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 10:34:38.671571  617708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 10:34:38.786607  617708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 10:34:38.892441  617708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 10:34:38.905567  617708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 10:34:38.922831  617708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 10:34:38.922895  617708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:34:38.932804  617708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 10:34:38.932865  617708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:34:38.942693  617708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:34:38.952607  617708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:34:38.962413  617708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 10:34:38.972377  617708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:34:38.981878  617708 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:34:38.997306  617708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
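The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: set the pause image, force the cgroupfs cgroup manager, re-add `conmon_cgroup = "pod"`, and make sure `default_sysctls` allows unprivileged low ports. A Go sketch applying equivalent rewrites to an in-memory copy of the config (illustrative; minikube shells out to sed as shown):

```go
package main

import (
	"fmt"
	"regexp"
)

// applyCrioEdits performs roughly the same rewrites as the sed commands above,
// but on a string instead of the file on the guest.
func applyCrioEdits(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	// Ensure a default_sysctls block exists, then prepend the unprivileged-port sysctl.
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "default_sysctls = [\n]\n"
	}
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
	return conf
}

func main() {
	before := "[crio.runtime]\npause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(applyCrioEdits(before))
}
```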
	I1209 10:34:39.007362  617708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 10:34:39.016018  617708 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 10:34:39.016096  617708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 10:34:39.027553  617708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 10:34:39.036029  617708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:34:39.143122  617708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 10:34:39.235446  617708 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 10:34:39.235560  617708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 10:34:39.240300  617708 start.go:563] Will wait 60s for crictl version
	I1209 10:34:39.240373  617708 ssh_runner.go:195] Run: which crictl
	I1209 10:34:39.243913  617708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 10:34:39.283891  617708 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 10:34:39.283988  617708 ssh_runner.go:195] Run: crio --version
	I1209 10:34:39.310143  617708 ssh_runner.go:195] Run: crio --version
	I1209 10:34:39.340092  617708 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 10:34:39.341562  617708 main.go:141] libmachine: (addons-156041) Calling .GetIP
	I1209 10:34:39.344421  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:39.344830  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:34:39.344854  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:34:39.345030  617708 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 10:34:39.348824  617708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:34:39.360923  617708 kubeadm.go:883] updating cluster {Name:addons-156041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:addons-156041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 10:34:39.361056  617708 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:34:39.361105  617708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 10:34:39.391048  617708 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 10:34:39.391137  617708 ssh_runner.go:195] Run: which lz4
	I1209 10:34:39.395012  617708 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 10:34:39.398788  617708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 10:34:39.398818  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 10:34:40.579576  617708 crio.go:462] duration metric: took 1.184590471s to copy over tarball
	I1209 10:34:40.579674  617708 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 10:34:42.648090  617708 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.068367956s)
	I1209 10:34:42.648129  617708 crio.go:469] duration metric: took 2.068514027s to extract the tarball
	I1209 10:34:42.648138  617708 ssh_runner.go:146] rm: /preloaded.tar.lz4
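The preload step above copies a ~392 MB `preloaded-images-k8s-*.tar.lz4` to the guest and unpacks it with `tar --xattrs -I lz4 -C /var`. A hedged Go sketch of reading such an lz4-compressed tar stream, assuming the third-party github.com/pierrec/lz4/v4 package; minikube itself shells out to tar as shown, so this only illustrates the file format:

```go
package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"

	"github.com/pierrec/lz4/v4"
)

func main() {
	// Placeholder path; the log stages the tarball as /preloaded.tar.lz4 on the guest.
	f, err := os.Open("preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Wrap the file in an LZ4 decompressor, then walk the tar entries.
	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		// Just list the entries here; a real extractor would write hdr.Name under
		// the target root and restore modes/xattrs the way tar does in the log.
		fmt.Printf("%s (%d bytes)\n", hdr.Name, hdr.Size)
	}
}
```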
	I1209 10:34:42.685312  617708 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 10:34:42.724362  617708 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 10:34:42.724398  617708 cache_images.go:84] Images are preloaded, skipping loading
	I1209 10:34:42.724414  617708 kubeadm.go:934] updating node { 192.168.39.161 8443 v1.31.2 crio true true} ...
	I1209 10:34:42.724567  617708 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-156041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-156041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
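The kubelet drop-in shown above is rendered from the node's name, IP, and Kubernetes version before being copied to /etc/systemd/system/kubelet.service.d. A small text/template sketch producing a similar unit; the struct and field names are invented for the example, not minikube's actual template types:

```go
package main

import (
	"os"
	"text/template"
)

// kubeletUnit is a simplified stand-in for the systemd drop-in shown in the log.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

type nodeParams struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the log above.
	err := tmpl.Execute(os.Stdout, nodeParams{
		KubernetesVersion: "v1.31.2",
		NodeName:          "addons-156041",
		NodeIP:            "192.168.39.161",
	})
	if err != nil {
		panic(err)
	}
}
```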
	I1209 10:34:42.724639  617708 ssh_runner.go:195] Run: crio config
	I1209 10:34:42.769917  617708 cni.go:84] Creating CNI manager for ""
	I1209 10:34:42.769946  617708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 10:34:42.769956  617708 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 10:34:42.769981  617708 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.161 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-156041 NodeName:addons-156041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 10:34:42.770112  617708 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-156041"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.161"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.161"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 10:34:42.770193  617708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 10:34:42.779819  617708 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 10:34:42.779900  617708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 10:34:42.788792  617708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1209 10:34:42.804278  617708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 10:34:42.819257  617708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
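The 2293-byte kubeadm.yaml copied above is generated on the host from the options logged earlier. As a sketch of how a fragment of that ClusterConfiguration could be rendered from Go values, assuming gopkg.in/yaml.v3 and invented struct types (kubeadm and minikube use their own API types, not these):

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// A tiny subset of kubeadm's ClusterConfiguration, just enough to show how the
// YAML above maps to Go values.
type networking struct {
	DNSDomain     string `yaml:"dnsDomain"`
	PodSubnet     string `yaml:"podSubnet"`
	ServiceSubnet string `yaml:"serviceSubnet"`
}

type clusterConfig struct {
	APIVersion           string     `yaml:"apiVersion"`
	Kind                 string     `yaml:"kind"`
	KubernetesVersion    string     `yaml:"kubernetesVersion"`
	ClusterName          string     `yaml:"clusterName"`
	ControlPlaneEndpoint string     `yaml:"controlPlaneEndpoint"`
	CertificatesDir      string     `yaml:"certificatesDir"`
	Networking           networking `yaml:"networking"`
}

func main() {
	cfg := clusterConfig{
		APIVersion:           "kubeadm.k8s.io/v1beta4",
		Kind:                 "ClusterConfiguration",
		KubernetesVersion:    "v1.31.2",
		ClusterName:          "mk",
		ControlPlaneEndpoint: "control-plane.minikube.internal:8443",
		CertificatesDir:      "/var/lib/minikube/certs",
		Networking: networking{
			DNSDomain:     "cluster.local",
			PodSubnet:     "10.244.0.0/16",
			ServiceSubnet: "10.96.0.0/12",
		},
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```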
	I1209 10:34:42.834043  617708 ssh_runner.go:195] Run: grep 192.168.39.161	control-plane.minikube.internal$ /etc/hosts
	I1209 10:34:42.837415  617708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:34:42.848699  617708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:34:42.949230  617708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:34:42.964274  617708 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041 for IP: 192.168.39.161
	I1209 10:34:42.964306  617708 certs.go:194] generating shared ca certs ...
	I1209 10:34:42.964331  617708 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:42.964509  617708 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 10:34:43.248749  617708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt ...
	I1209 10:34:43.248782  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt: {Name:mk622bdbb21507c1952d11c71417ae3a15eb5308 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:43.248955  617708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key ...
	I1209 10:34:43.248966  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key: {Name:mke21334aad9871880bb7c0cf3c037a39323dbe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:43.249041  617708 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 10:34:43.609861  617708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt ...
	I1209 10:34:43.609898  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt: {Name:mkd1bd2a4594eb40096825a894a5a40d1347c0f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:43.610080  617708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key ...
	I1209 10:34:43.610092  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key: {Name:mkb829d565cca0c0464dd4998a8770ec52136425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:43.610166  617708 certs.go:256] generating profile certs ...
	I1209 10:34:43.610254  617708 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.key
	I1209 10:34:43.610269  617708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt with IP's: []
	I1209 10:34:43.991857  617708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt ...
	I1209 10:34:43.991889  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: {Name:mk4acbf815427ee71599db617da9affaa4b132e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:43.992086  617708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.key ...
	I1209 10:34:43.992102  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.key: {Name:mk27229f75f751fd341adb8e2be9816d7605d2c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:43.992211  617708 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.key.7d199bea
	I1209 10:34:43.992232  617708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.crt.7d199bea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.161]
	I1209 10:34:44.231298  617708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.crt.7d199bea ...
	I1209 10:34:44.231339  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.crt.7d199bea: {Name:mk826768f1c37d7a376a1dc76e73b02655cee348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:44.231577  617708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.key.7d199bea ...
	I1209 10:34:44.231599  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.key.7d199bea: {Name:mke74e9d81fe6d8530d9bbcb64d3edf05a659851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:44.231723  617708 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.crt.7d199bea -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.crt
	I1209 10:34:44.231818  617708 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.key.7d199bea -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.key
	I1209 10:34:44.231877  617708 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/proxy-client.key
	I1209 10:34:44.231900  617708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/proxy-client.crt with IP's: []
	I1209 10:34:44.552711  617708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/proxy-client.crt ...
	I1209 10:34:44.552747  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/proxy-client.crt: {Name:mk9c785b66230b8e800afed97f1d945d7e6f65d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:44.552953  617708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/proxy-client.key ...
	I1209 10:34:44.552979  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/proxy-client.key: {Name:mkb1c95d8a7bce4ef5bd80b6946bff12403ee745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:34:44.553227  617708 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 10:34:44.553273  617708 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 10:34:44.553304  617708 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 10:34:44.553331  617708 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 10:34:44.554072  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 10:34:44.596811  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 10:34:44.645358  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 10:34:44.667497  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 10:34:44.689470  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 10:34:44.711968  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 10:34:44.734723  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 10:34:44.756160  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 10:34:44.777957  617708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 10:34:44.799771  617708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 10:34:44.815818  617708 ssh_runner.go:195] Run: openssl version
	I1209 10:34:44.821770  617708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 10:34:44.832527  617708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:34:44.836683  617708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:34:44.836756  617708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:34:44.842276  617708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 10:34:44.852431  617708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 10:34:44.856064  617708 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 10:34:44.856128  617708 kubeadm.go:392] StartCluster: {Name:addons-156041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 C
lusterName:addons-156041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:34:44.856228  617708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 10:34:44.856321  617708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 10:34:44.896386  617708 cri.go:89] found id: ""
	I1209 10:34:44.896477  617708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 10:34:44.905880  617708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 10:34:44.914902  617708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 10:34:44.924243  617708 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 10:34:44.924265  617708 kubeadm.go:157] found existing configuration files:
	
	I1209 10:34:44.924309  617708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 10:34:44.932900  617708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 10:34:44.932968  617708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 10:34:44.941842  617708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 10:34:44.950239  617708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 10:34:44.950298  617708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 10:34:44.959133  617708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 10:34:44.967550  617708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 10:34:44.967611  617708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 10:34:44.976393  617708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 10:34:44.984503  617708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 10:34:44.984553  617708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
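The four grep/rm pairs above implement one rule: remove any existing kubeconfig that does not already point at https://control-plane.minikube.internal:8443, so kubeadm regenerates it. A compact Go equivalent of that loop, run against local files to make the control flow explicit (minikube executes the same checks over SSH as shown):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	configs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range configs {
		data, err := os.ReadFile(path)
		if errors.Is(err, os.ErrNotExist) {
			// Matches the log: on a fresh node the file is absent and there is nothing to clean up.
			fmt.Printf("%s: not found, skipping\n", path)
			continue
		}
		if err != nil {
			panic(err)
		}
		if !strings.Contains(string(data), endpoint) {
			// Stale config from a previous cluster; remove it so kubeadm writes a new one.
			fmt.Printf("%s: missing %s, removing\n", path, endpoint)
			if err := os.Remove(path); err != nil {
				panic(err)
			}
		}
	}
}
```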
	I1209 10:34:44.993046  617708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 10:34:45.144153  617708 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 10:34:55.293039  617708 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 10:34:55.293158  617708 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 10:34:55.293302  617708 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 10:34:55.293486  617708 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 10:34:55.293650  617708 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 10:34:55.293754  617708 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 10:34:55.295582  617708 out.go:235]   - Generating certificates and keys ...
	I1209 10:34:55.295681  617708 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 10:34:55.295742  617708 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 10:34:55.295826  617708 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 10:34:55.295925  617708 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 10:34:55.295998  617708 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 10:34:55.296067  617708 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 10:34:55.296146  617708 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 10:34:55.296263  617708 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-156041 localhost] and IPs [192.168.39.161 127.0.0.1 ::1]
	I1209 10:34:55.296310  617708 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 10:34:55.296460  617708 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-156041 localhost] and IPs [192.168.39.161 127.0.0.1 ::1]
	I1209 10:34:55.296569  617708 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 10:34:55.296665  617708 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 10:34:55.296728  617708 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 10:34:55.296806  617708 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 10:34:55.296884  617708 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 10:34:55.296974  617708 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 10:34:55.297040  617708 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 10:34:55.297125  617708 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 10:34:55.297212  617708 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 10:34:55.297314  617708 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 10:34:55.297399  617708 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 10:34:55.299074  617708 out.go:235]   - Booting up control plane ...
	I1209 10:34:55.299169  617708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 10:34:55.299244  617708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 10:34:55.299312  617708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 10:34:55.299398  617708 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 10:34:55.299517  617708 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 10:34:55.299589  617708 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 10:34:55.299780  617708 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 10:34:55.299871  617708 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 10:34:55.299921  617708 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.580427ms
	I1209 10:34:55.299986  617708 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 10:34:55.300034  617708 kubeadm.go:310] [api-check] The API server is healthy after 5.00204533s
	I1209 10:34:55.300142  617708 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 10:34:55.300283  617708 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 10:34:55.300375  617708 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 10:34:55.300576  617708 kubeadm.go:310] [mark-control-plane] Marking the node addons-156041 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 10:34:55.300659  617708 kubeadm.go:310] [bootstrap-token] Using token: 1ez8ht.ew72wo64yxy4gta0
	I1209 10:34:55.302324  617708 out.go:235]   - Configuring RBAC rules ...
	I1209 10:34:55.302494  617708 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 10:34:55.302578  617708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 10:34:55.302694  617708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 10:34:55.302830  617708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 10:34:55.302970  617708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 10:34:55.303079  617708 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 10:34:55.303213  617708 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 10:34:55.303267  617708 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 10:34:55.303330  617708 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 10:34:55.303342  617708 kubeadm.go:310] 
	I1209 10:34:55.303439  617708 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 10:34:55.303452  617708 kubeadm.go:310] 
	I1209 10:34:55.303576  617708 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 10:34:55.303584  617708 kubeadm.go:310] 
	I1209 10:34:55.303605  617708 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 10:34:55.303655  617708 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 10:34:55.303698  617708 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 10:34:55.303703  617708 kubeadm.go:310] 
	I1209 10:34:55.303747  617708 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 10:34:55.303752  617708 kubeadm.go:310] 
	I1209 10:34:55.303836  617708 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 10:34:55.303867  617708 kubeadm.go:310] 
	I1209 10:34:55.303953  617708 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 10:34:55.304061  617708 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 10:34:55.304150  617708 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 10:34:55.304161  617708 kubeadm.go:310] 
	I1209 10:34:55.304233  617708 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 10:34:55.304327  617708 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 10:34:55.304341  617708 kubeadm.go:310] 
	I1209 10:34:55.304458  617708 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1ez8ht.ew72wo64yxy4gta0 \
	I1209 10:34:55.304610  617708 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 10:34:55.304631  617708 kubeadm.go:310] 	--control-plane 
	I1209 10:34:55.304637  617708 kubeadm.go:310] 
	I1209 10:34:55.304708  617708 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 10:34:55.304714  617708 kubeadm.go:310] 
	I1209 10:34:55.304821  617708 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1ez8ht.ew72wo64yxy4gta0 \
	I1209 10:34:55.304960  617708 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 10:34:55.304975  617708 cni.go:84] Creating CNI manager for ""
	I1209 10:34:55.304983  617708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 10:34:55.306396  617708 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 10:34:55.307738  617708 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 10:34:55.319758  617708 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
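The 496-byte file pushed above is the bridge CNI conflist that minikube drops into /etc/cni/net.d. A minimal sketch of what such a bridge conflist typically looks like follows; the exact fields and the 10.244.0.0/16 subnet are assumptions for illustration, not the file captured in this run:

    # Hypothetical reconstruction of /etc/cni/net.d/1-k8s.conflist (not the captured file)
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF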
	I1209 10:34:55.338262  617708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 10:34:55.338343  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:55.338370  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-156041 minikube.k8s.io/updated_at=2024_12_09T10_34_55_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=addons-156041 minikube.k8s.io/primary=true
	I1209 10:34:55.353438  617708 ops.go:34] apiserver oom_adj: -16
	I1209 10:34:55.457021  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:55.957278  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:56.457210  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:56.957858  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:57.458114  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:57.957308  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:58.457398  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:58.957294  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:59.457096  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:34:59.957109  617708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:35:00.072064  617708 kubeadm.go:1113] duration metric: took 4.733788723s to wait for elevateKubeSystemPrivileges
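The elevateKubeSystemPrivileges step timed above is the combination of the clusterrolebinding created at 10:34:55.338 and the repeated "kubectl get sa default" probes that follow: bind cluster-admin to kube-system:default, then poll until the default service account exists. A rough shell equivalent is sketched below; the loop shape and the 0.5s interval are inferred from the timestamps, not taken from minikube's source:

    KUBECTL=/var/lib/minikube/binaries/v1.31.2/kubectl
    KUBECONFIG=/var/lib/minikube/kubeconfig

    # Grant cluster-admin to the kube-system default service account (as in the log above).
    sudo "$KUBECTL" --kubeconfig="$KUBECONFIG" create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default

    # Poll until the default service account has been created by the controller manager.
    until sudo "$KUBECTL" --kubeconfig="$KUBECONFIG" get sa default >/dev/null 2>&1; do
      sleep 0.5
    done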
	I1209 10:35:00.072119  617708 kubeadm.go:394] duration metric: took 15.215996974s to StartCluster
	I1209 10:35:00.072148  617708 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:35:00.072271  617708 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:35:00.072735  617708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:35:00.072935  617708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 10:35:00.072942  617708 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:35:00.073027  617708 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
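The toEnable map above is the addon selection stored in the profile. For reference, the same selection would normally be expressed from the minikube CLI roughly as follows; the addon names are taken from the map, but these commands are illustrative and were not run by the test:

    # Enable a set of addons at start time on this profile:
    minikube start -p addons-156041 --driver=kvm2 --container-runtime=crio \
      --addons=ingress,ingress-dns,metrics-server,registry,csi-hostpath-driver,volumesnapshots

    # Or toggle individual addons on the running profile:
    minikube -p addons-156041 addons enable yakd
    minikube -p addons-156041 addons disable volcano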
	I1209 10:35:00.073172  617708 addons.go:69] Setting yakd=true in profile "addons-156041"
	I1209 10:35:00.073191  617708 addons.go:234] Setting addon yakd=true in "addons-156041"
	I1209 10:35:00.073193  617708 addons.go:69] Setting inspektor-gadget=true in profile "addons-156041"
	I1209 10:35:00.073210  617708 config.go:182] Loaded profile config "addons-156041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:35:00.073228  617708 addons.go:234] Setting addon inspektor-gadget=true in "addons-156041"
	I1209 10:35:00.073239  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.073227  617708 addons.go:69] Setting storage-provisioner=true in profile "addons-156041"
	I1209 10:35:00.073255  617708 addons.go:69] Setting ingress-dns=true in profile "addons-156041"
	I1209 10:35:00.073266  617708 addons.go:234] Setting addon ingress-dns=true in "addons-156041"
	I1209 10:35:00.073255  617708 addons.go:69] Setting ingress=true in profile "addons-156041"
	I1209 10:35:00.073281  617708 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-156041"
	I1209 10:35:00.073295  617708 addons.go:234] Setting addon ingress=true in "addons-156041"
	I1209 10:35:00.073281  617708 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-156041"
	I1209 10:35:00.073268  617708 addons.go:234] Setting addon storage-provisioner=true in "addons-156041"
	I1209 10:35:00.073310  617708 addons.go:69] Setting volumesnapshots=true in profile "addons-156041"
	I1209 10:35:00.073307  617708 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-156041"
	I1209 10:35:00.073318  617708 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-156041"
	I1209 10:35:00.073329  617708 addons.go:69] Setting cloud-spanner=true in profile "addons-156041"
	I1209 10:35:00.073329  617708 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-156041"
	I1209 10:35:00.073333  617708 addons.go:69] Setting gcp-auth=true in profile "addons-156041"
	I1209 10:35:00.073339  617708 addons.go:234] Setting addon cloud-spanner=true in "addons-156041"
	I1209 10:35:00.073348  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.073351  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.073350  617708 mustload.go:65] Loading cluster: addons-156041
	I1209 10:35:00.073354  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.073520  617708 config.go:182] Loaded profile config "addons-156041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:35:00.073298  617708 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-156041"
	I1209 10:35:00.073775  617708 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-156041"
	I1209 10:35:00.073790  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.073809  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.073835  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.073863  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.073321  617708 addons.go:234] Setting addon volumesnapshots=true in "addons-156041"
	I1209 10:35:00.073867  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.073888  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.073838  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.073811  617708 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-156041"
	I1209 10:35:00.074043  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.073890  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.073794  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.074300  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.073298  617708 addons.go:69] Setting volcano=true in profile "addons-156041"
	I1209 10:35:00.074331  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.074345  617708 addons.go:234] Setting addon volcano=true in "addons-156041"
	I1209 10:35:00.073250  617708 addons.go:69] Setting default-storageclass=true in profile "addons-156041"
	I1209 10:35:00.074424  617708 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-156041"
	I1209 10:35:00.073282  617708 addons.go:69] Setting registry=true in profile "addons-156041"
	I1209 10:35:00.074509  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.074537  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.074558  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.074514  617708 addons.go:234] Setting addon registry=true in "addons-156041"
	I1209 10:35:00.073325  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.074665  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.074688  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.074832  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.074843  617708 out.go:177] * Verifying Kubernetes components...
	I1209 10:35:00.074888  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.074907  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.073274  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.074970  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.074995  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.074853  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.075094  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.073275  617708 addons.go:69] Setting metrics-server=true in profile "addons-156041"
	I1209 10:35:00.075287  617708 addons.go:234] Setting addon metrics-server=true in "addons-156041"
	I1209 10:35:00.075318  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.073760  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.075428  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.073793  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.075513  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.073816  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.073306  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.076054  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.076073  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.077084  617708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:35:00.094729  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36289
	I1209 10:35:00.094903  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I1209 10:35:00.106387  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43933
	I1209 10:35:00.106548  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.106595  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.106693  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.106722  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.106816  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.106851  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.106857  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44337
	I1209 10:35:00.107137  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I1209 10:35:00.107317  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.107422  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.107502  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.107578  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.107915  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.107933  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.108051  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.108063  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.108111  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.108200  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.108207  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.108651  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.108722  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.108854  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.108864  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.108975  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.108986  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.109433  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.109459  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.109910  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.109990  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43677
	I1209 10:35:00.110143  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.110595  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.110617  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.110773  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.110796  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.114608  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.114688  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.114720  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.115213  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.115233  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.116152  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.116195  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.116795  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.116870  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.117248  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.117286  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.117458  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I1209 10:35:00.132986  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33187
	I1209 10:35:00.133630  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.134265  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.134286  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.134747  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.135205  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.139057  617708 addons.go:234] Setting addon default-storageclass=true in "addons-156041"
	I1209 10:35:00.139110  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.139515  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.139553  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.147754  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35967
	I1209 10:35:00.147817  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42173
	I1209 10:35:00.148671  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42379
	I1209 10:35:00.148739  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.148842  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38279
	I1209 10:35:00.148887  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.149524  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.149606  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.149609  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.149623  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.149628  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.149988  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.150097  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.150155  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.150180  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.150665  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.150713  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.150937  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.151102  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.151152  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.151170  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.151592  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.151639  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.151877  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.151959  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.151975  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.152022  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.152039  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.152406  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.152457  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.152509  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.153012  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.153048  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.153884  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.153923  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.154083  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39677
	I1209 10:35:00.154587  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.155105  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.155123  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.155335  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.155474  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.155867  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39765
	I1209 10:35:00.156311  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.156783  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.156807  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.157057  617708 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1209 10:35:00.157150  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.157297  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.158098  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46033
	I1209 10:35:00.158315  617708 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1209 10:35:00.158327  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I1209 10:35:00.158335  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1209 10:35:00.158358  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.160820  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.161313  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.161413  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.161465  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.161487  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.161752  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.161804  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.161996  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.162014  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.162083  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.162151  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.162197  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.162537  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.162714  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.162770  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.162897  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.163061  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.163211  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.163463  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.164495  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1209 10:35:00.164586  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.164623  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.165871  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.166819  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36185
	I1209 10:35:00.167201  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.167231  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1209 10:35:00.167792  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.167813  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.168095  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1209 10:35:00.168220  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.168365  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.169337  617708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1209 10:35:00.169357  617708 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1209 10:35:00.169379  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.170858  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1209 10:35:00.171497  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41309
	I1209 10:35:00.171949  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.172694  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.172714  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.173042  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.173218  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1209 10:35:00.173474  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.173579  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.173597  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.173858  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.173938  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.174189  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.174238  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.174491  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.174704  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.174870  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.175544  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1209 10:35:00.175684  617708 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1209 10:35:00.176810  617708 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 10:35:00.176830  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1209 10:35:00.176851  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.178302  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1209 10:35:00.179485  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1209 10:35:00.180638  617708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1209 10:35:00.181485  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.181968  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.182036  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.182153  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.182298  617708 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1209 10:35:00.182324  617708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1209 10:35:00.182347  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.182391  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.182561  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.182724  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.185153  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36883
	I1209 10:35:00.185545  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.185995  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.186094  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.186231  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.186373  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.186473  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.186681  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.186754  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40277
	I1209 10:35:00.187056  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.187750  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.187861  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I1209 10:35:00.188425  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.188619  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.188631  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.188918  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.188938  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.189159  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.189175  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.189375  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.189983  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.190028  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.190262  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.190301  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.190487  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.190576  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.192236  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.193776  617708 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-156041"
	I1209 10:35:00.193828  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:00.194232  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.194279  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.195460  617708 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 10:35:00.196802  617708 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 10:35:00.196824  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 10:35:00.196847  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.197604  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36539
	I1209 10:35:00.200567  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.201206  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.201235  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.201463  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.201650  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.201818  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.201987  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.202342  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I1209 10:35:00.202488  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39223
	I1209 10:35:00.202615  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.203094  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.203115  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.203277  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.203836  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.203853  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.204301  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.204366  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.204581  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.205716  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.205766  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.206618  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.207154  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.207963  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.207982  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.208422  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.208611  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.209504  617708 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1209 10:35:00.210722  617708 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1209 10:35:00.210743  617708 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1209 10:35:00.210766  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.211474  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.212701  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42909
	I1209 10:35:00.212865  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40549
	I1209 10:35:00.212931  617708 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1209 10:35:00.213586  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.213676  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.214338  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.214368  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.214755  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I1209 10:35:00.215353  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.215392  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.215415  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.215606  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.215629  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.215716  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.215738  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.215805  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.216282  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.216298  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.216358  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.216400  617708 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 10:35:00.216406  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.216550  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.216842  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.216843  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.216947  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.217152  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32863
	I1209 10:35:00.217337  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.217908  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45867
	I1209 10:35:00.217905  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.217915  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.218453  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36569
	I1209 10:35:00.218470  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.218805  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.218989  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.219007  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.219099  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.219371  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.219395  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.219378  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.219679  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.219813  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.219819  617708 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 10:35:00.219652  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.219993  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.220073  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.220086  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:00.220095  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.220097  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:00.220238  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.220260  617708 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1209 10:35:00.220647  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:00.220665  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.220679  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:00.220687  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:00.220694  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:00.220700  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:00.220856  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.220934  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:00.220954  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:00.220961  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	W1209 10:35:00.221098  617708 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1209 10:35:00.221435  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.221830  617708 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 10:35:00.221850  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1209 10:35:00.221871  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.221973  617708 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 10:35:00.221990  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1209 10:35:00.222009  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.222553  617708 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1209 10:35:00.223074  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.224416  617708 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1209 10:35:00.224502  617708 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1209 10:35:00.224554  617708 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 10:35:00.224572  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1209 10:35:00.224593  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.226610  617708 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1209 10:35:00.226629  617708 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1209 10:35:00.226648  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.226651  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.226616  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32875
	I1209 10:35:00.226791  617708 out.go:177]   - Using image docker.io/registry:2.8.3
	I1209 10:35:00.227247  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.227250  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.227269  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.227663  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.227774  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.227821  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33269
	I1209 10:35:00.227936  617708 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1209 10:35:00.227951  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1209 10:35:00.227967  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.228773  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.228866  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.228884  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.228889  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.228904  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.228925  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.228949  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.229164  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.229186  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.229229  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.229455  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.229459  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.229671  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.229806  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.229819  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.230338  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.230614  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.230862  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:00.230887  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:00.231054  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.231244  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.231595  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.231614  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.231786  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.232164  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.232161  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.232234  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.232451  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.232542  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.232639  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.232854  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.232912  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.233120  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.233526  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.234695  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.235039  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.235066  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.235261  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.235436  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.235649  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.235757  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.235916  617708 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	W1209 10:35:00.236497  617708 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35788->192.168.39.161:22: read: connection reset by peer
	I1209 10:35:00.236529  617708 retry.go:31] will retry after 205.977713ms: ssh: handshake failed: read tcp 192.168.39.1:35788->192.168.39.161:22: read: connection reset by peer
	I1209 10:35:00.237658  617708 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 10:35:00.237679  617708 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 10:35:00.237712  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.240922  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.241326  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.241352  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.241580  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.241744  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.241868  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.241974  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.248655  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40023
	I1209 10:35:00.249180  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.249780  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.249805  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.250127  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.250324  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.252010  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.252247  617708 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 10:35:00.252262  617708 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 10:35:00.252279  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.254428  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I1209 10:35:00.255031  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:00.255637  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.255757  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:00.255771  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:00.255852  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.255868  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.255899  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.256050  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.256186  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.256313  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.256606  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:00.256792  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:00.258268  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:00.260209  617708 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1209 10:35:00.261526  617708 out.go:177]   - Using image docker.io/busybox:stable
	I1209 10:35:00.262862  617708 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 10:35:00.262885  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1209 10:35:00.262908  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:00.266031  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.266525  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:00.266551  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:00.266705  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:00.266922  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:00.267060  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:00.267219  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:00.524078  617708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:35:00.524157  617708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 10:35:00.556787  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 10:35:00.568737  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 10:35:00.604237  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1209 10:35:00.616040  617708 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1209 10:35:00.616083  617708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1209 10:35:00.646338  617708 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1209 10:35:00.646371  617708 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1209 10:35:00.732967  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 10:35:00.742502  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 10:35:00.749111  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 10:35:00.756379  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 10:35:00.801038  617708 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1209 10:35:00.801070  617708 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1209 10:35:00.823280  617708 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1209 10:35:00.823314  617708 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1209 10:35:00.853505  617708 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1209 10:35:00.853543  617708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1209 10:35:00.865237  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 10:35:00.877339  617708 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 10:35:00.877362  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1209 10:35:00.893337  617708 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1209 10:35:00.893365  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1209 10:35:00.904948  617708 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1209 10:35:00.904979  617708 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1209 10:35:00.966833  617708 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1209 10:35:00.966864  617708 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1209 10:35:00.997544  617708 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1209 10:35:00.997586  617708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1209 10:35:01.029431  617708 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 10:35:01.029460  617708 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 10:35:01.053665  617708 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 10:35:01.053697  617708 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 10:35:01.081333  617708 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1209 10:35:01.081361  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1209 10:35:01.143187  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1209 10:35:01.180337  617708 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1209 10:35:01.180369  617708 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1209 10:35:01.216920  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1209 10:35:01.220096  617708 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1209 10:35:01.220123  617708 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1209 10:35:01.247759  617708 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1209 10:35:01.247803  617708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1209 10:35:01.327854  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 10:35:01.380827  617708 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1209 10:35:01.380858  617708 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1209 10:35:01.414854  617708 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1209 10:35:01.414879  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1209 10:35:01.468511  617708 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1209 10:35:01.468554  617708 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1209 10:35:01.599427  617708 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 10:35:01.599456  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1209 10:35:01.617771  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1209 10:35:01.668722  617708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1209 10:35:01.668748  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1209 10:35:01.852015  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 10:35:01.941830  617708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1209 10:35:01.941865  617708 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1209 10:35:02.050480  617708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1209 10:35:02.050509  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1209 10:35:02.421394  617708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1209 10:35:02.421440  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1209 10:35:02.827613  617708 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 10:35:02.827647  617708 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1209 10:35:02.966980  617708 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.442773358s)
	I1209 10:35:02.967021  617708 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1209 10:35:02.967027  617708 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.442903391s)
	I1209 10:35:02.967799  617708 node_ready.go:35] waiting up to 6m0s for node "addons-156041" to be "Ready" ...
	I1209 10:35:02.971419  617708 node_ready.go:49] node "addons-156041" has status "Ready":"True"
	I1209 10:35:02.971454  617708 node_ready.go:38] duration metric: took 3.630999ms for node "addons-156041" to be "Ready" ...
	I1209 10:35:02.971469  617708 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 10:35:02.990608  617708 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:03.087417  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 10:35:03.474541  617708 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-156041" context rescaled to 1 replicas
	I1209 10:35:04.197602  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.640762302s)
	I1209 10:35:04.197670  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.628893777s)
	I1209 10:35:04.197721  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:04.197737  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:04.197679  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:04.197796  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:04.197730  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.593459686s)
	I1209 10:35:04.197847  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:04.197857  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:04.198080  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:04.198096  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:04.198107  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:04.198114  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:04.198286  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:04.198302  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:04.198290  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:04.198336  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:04.198338  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:04.198355  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:04.198365  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:04.198374  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:04.198397  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:04.198412  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:04.198637  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:04.198644  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:04.198665  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:04.198672  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:04.198673  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:04.198680  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:04.200887  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:04.200906  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:04.200906  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:04.755322  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.02231028s)
	I1209 10:35:04.755386  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:04.755402  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:04.755789  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:04.755811  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:04.755825  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:04.755831  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:04.756223  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:04.756272  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:04.756284  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:05.080642  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:05.934564  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.192011503s)
	I1209 10:35:05.934634  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:05.934646  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:05.935086  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:05.935115  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:05.935124  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:05.935157  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:05.935169  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:05.935476  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:05.935514  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:05.935522  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:06.021143  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:06.021183  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:06.021519  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:06.021542  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:07.106730  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:07.248509  617708 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1209 10:35:07.248568  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:07.251869  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:07.252243  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:07.252278  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:07.252444  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:07.252691  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:07.252899  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:07.253095  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:07.675590  617708 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1209 10:35:07.914577  617708 addons.go:234] Setting addon gcp-auth=true in "addons-156041"
	I1209 10:35:07.914650  617708 host.go:66] Checking if "addons-156041" exists ...
	I1209 10:35:07.915013  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:07.915068  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:07.931536  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I1209 10:35:07.932217  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:07.932828  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:07.932850  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:07.933292  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:07.934000  617708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:35:07.934064  617708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:35:07.950018  617708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37643
	I1209 10:35:07.950515  617708 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:35:07.950999  617708 main.go:141] libmachine: Using API Version  1
	I1209 10:35:07.951018  617708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:35:07.951474  617708 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:35:07.951696  617708 main.go:141] libmachine: (addons-156041) Calling .GetState
	I1209 10:35:07.953645  617708 main.go:141] libmachine: (addons-156041) Calling .DriverName
	I1209 10:35:07.953882  617708 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1209 10:35:07.953907  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHHostname
	I1209 10:35:07.956691  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:07.957115  617708 main.go:141] libmachine: (addons-156041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:f1:8a", ip: ""} in network mk-addons-156041: {Iface:virbr1 ExpiryTime:2024-12-09 11:34:25 +0000 UTC Type:0 Mac:52:54:00:fc:f1:8a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:addons-156041 Clientid:01:52:54:00:fc:f1:8a}
	I1209 10:35:07.957146  617708 main.go:141] libmachine: (addons-156041) DBG | domain addons-156041 has defined IP address 192.168.39.161 and MAC address 52:54:00:fc:f1:8a in network mk-addons-156041
	I1209 10:35:07.957318  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHPort
	I1209 10:35:07.957536  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHKeyPath
	I1209 10:35:07.957700  617708 main.go:141] libmachine: (addons-156041) Calling .GetSSHUsername
	I1209 10:35:07.957845  617708 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/addons-156041/id_rsa Username:docker}
	I1209 10:35:08.758511  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.009356161s)
	I1209 10:35:08.758556  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.002140967s)
	I1209 10:35:08.758574  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.758587  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.758598  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.758610  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.758617  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.893335089s)
	I1209 10:35:08.758659  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.758677  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.758763  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.615544263s)
	I1209 10:35:08.758788  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.758798  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.758937  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.54197958s)
	I1209 10:35:08.759017  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.759046  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.759075  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.141274788s)
	I1209 10:35:08.759107  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.759122  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.759280  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.907232523s)
	I1209 10:35:08.759302  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	W1209 10:35:08.759309  617708 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 10:35:08.759323  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.759351  617708 retry.go:31] will retry after 138.951478ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 10:35:08.759359  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.759378  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.759376  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.759390  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.759400  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.759407  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.759416  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.759423  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.759027  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.43113163s)
	I1209 10:35:08.759505  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.759525  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.759533  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.759544  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.759500  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.759583  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.759800  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.759810  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.759821  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.759829  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.759836  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.759851  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.759866  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.759886  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.759892  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.759902  617708 addons.go:475] Verifying addon ingress=true in "addons-156041"
	I1209 10:35:08.760125  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.760158  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.760165  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.760175  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.760181  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.760240  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.760246  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.760691  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.760723  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.760734  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.761390  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.761403  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.761622  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.761640  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.761650  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.761659  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.761845  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.761901  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.761922  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.761930  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.761852  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.761949  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.761960  617708 addons.go:475] Verifying addon metrics-server=true in "addons-156041"
	I1209 10:35:08.762058  617708 out.go:177] * Verifying ingress addon...
	I1209 10:35:08.761941  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.763224  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.763250  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.763256  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.763265  617708 addons.go:475] Verifying addon registry=true in "addons-156041"
	I1209 10:35:08.763414  617708 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-156041 service yakd-dashboard -n yakd-dashboard
	
	I1209 10:35:08.761355  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.763607  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:08.763632  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.763978  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.765273  617708 out.go:177] * Verifying registry addon...
	I1209 10:35:08.765308  617708 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1209 10:35:08.767396  617708 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1209 10:35:08.770984  617708 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1209 10:35:08.771007  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:08.791512  617708 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 10:35:08.791540  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:08.793415  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:08.793433  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:08.793736  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:08.793754  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:08.899302  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 10:35:09.281512  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:09.335439  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:09.561200  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:09.584144  617708 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.630218743s)
	I1209 10:35:09.584272  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.496779935s)
	I1209 10:35:09.584422  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:09.584457  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:09.584890  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:09.584971  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:09.584985  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:09.585009  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:09.585018  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:09.585388  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:09.585425  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:09.585444  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:09.585469  617708 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-156041"
	I1209 10:35:09.585925  617708 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 10:35:09.587475  617708 out.go:177] * Verifying csi-hostpath-driver addon...
	I1209 10:35:09.589173  617708 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1209 10:35:09.590114  617708 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1209 10:35:09.590719  617708 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1209 10:35:09.590739  617708 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1209 10:35:09.638221  617708 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 10:35:09.638252  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:09.703286  617708 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1209 10:35:09.703326  617708 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1209 10:35:09.783816  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:09.784120  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:09.942159  617708 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 10:35:09.942210  617708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1209 10:35:09.981015  617708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 10:35:10.096099  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:10.269214  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:10.270855  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:10.594720  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:10.660393  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.76103319s)
	I1209 10:35:10.660464  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:10.660482  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:10.660825  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:10.660893  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:10.660910  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:10.660918  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:10.660868  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:10.661199  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:10.661209  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:10.661217  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:10.769697  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:10.771091  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:11.103535  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:11.320313  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:11.320371  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:11.347720  617708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.366655085s)
	I1209 10:35:11.347789  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:11.347805  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:11.348213  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:11.348257  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:11.348269  617708 main.go:141] libmachine: Making call to close driver server
	I1209 10:35:11.348277  617708 main.go:141] libmachine: (addons-156041) Calling .Close
	I1209 10:35:11.348537  617708 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:35:11.348553  617708 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:35:11.348594  617708 main.go:141] libmachine: (addons-156041) DBG | Closing plugin on server side
	I1209 10:35:11.349654  617708 addons.go:475] Verifying addon gcp-auth=true in "addons-156041"
	I1209 10:35:11.351214  617708 out.go:177] * Verifying gcp-auth addon...
	I1209 10:35:11.353918  617708 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1209 10:35:11.410828  617708 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 10:35:11.410854  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:11.595766  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:11.772703  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:11.776697  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:11.857960  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:12.003166  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:12.095430  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:12.270151  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:12.272055  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:12.358097  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:12.596066  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:12.770254  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:12.772057  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:12.859268  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:13.095439  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:13.269784  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:13.270754  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:13.357598  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:13.595145  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:13.770128  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:13.771841  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:13.857899  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:14.095391  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:14.269550  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:14.270780  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:14.357663  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:14.497528  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:14.595716  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:14.770862  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:14.771418  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:14.859350  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:15.344740  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:15.445241  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:15.445711  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:15.445933  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:15.595493  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:15.771626  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:15.772596  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:15.857751  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:16.097083  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:16.271870  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:16.272158  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:16.360610  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:16.509710  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:16.599718  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:16.770558  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:16.772222  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:16.857965  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:17.095117  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:17.270235  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:17.270529  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:17.357281  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:17.594959  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:17.770261  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:17.771384  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:17.858857  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:18.095432  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:18.272233  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:18.272337  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:18.358183  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:18.595194  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:18.769389  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:18.770895  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:18.857959  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:18.996622  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:19.096555  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:19.269814  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:19.271458  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:19.357123  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:19.595054  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:19.770055  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:19.772716  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:19.857378  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:20.095979  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:20.270532  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:20.272648  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:20.357926  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:20.595341  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:20.770317  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:20.772213  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:20.858059  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:21.095441  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:21.269967  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:21.271635  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:21.357844  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:21.497323  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:21.594263  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:21.769887  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:21.770680  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:21.857519  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:22.094851  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:22.269545  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:22.270741  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:22.357787  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:22.594719  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:22.769525  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:22.771209  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:22.857992  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:23.094935  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:23.269871  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:23.272077  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:23.357909  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:23.594697  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:23.770012  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:23.770998  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:23.858021  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:23.996141  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:24.096322  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:24.268895  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:24.270686  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:24.357395  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:24.908693  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:24.909352  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:24.909968  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:24.910162  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:25.094915  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:25.269620  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:25.271449  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:25.357056  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:25.596798  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:25.769920  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:25.771941  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:25.857521  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:25.996869  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:26.095049  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:26.271108  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:26.271470  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:26.357773  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:26.594992  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:26.770375  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:26.771782  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:26.857537  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:27.094955  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:27.270711  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:27.271691  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:27.357631  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:27.594434  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:27.770273  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:27.770996  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:27.858350  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:27.998875  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:28.095261  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:28.290633  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:28.290974  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:28.357248  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:28.594967  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:28.770453  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:28.771966  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:28.857678  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:29.094614  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:29.269595  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:29.271583  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:29.357945  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:29.595446  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:29.771056  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:29.772122  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:29.858422  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:30.094678  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:30.269919  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:30.271225  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:30.357407  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:30.497372  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:30.594675  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:30.769769  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:30.771320  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:30.856918  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:31.095331  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:31.269323  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:31.270560  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:31.357807  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:31.595049  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:31.770402  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:31.770671  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:31.857536  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:32.094590  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:32.270094  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:32.272296  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:32.369972  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:32.497633  617708 pod_ready.go:103] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"False"
	I1209 10:35:32.594284  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:32.770903  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:32.771762  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:32.857474  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:33.094201  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:33.269074  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:33.270289  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:33.358202  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:33.596726  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:33.771834  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:33.772008  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:33.871602  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:34.094926  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:34.275523  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:34.277329  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:34.374066  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:34.504658  617708 pod_ready.go:93] pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace has status "Ready":"True"
	I1209 10:35:34.504684  617708 pod_ready.go:82] duration metric: took 31.514043184s for pod "amd-gpu-device-plugin-hbkzd" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.504694  617708 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cd4lm" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.510154  617708 pod_ready.go:93] pod "coredns-7c65d6cfc9-cd4lm" in "kube-system" namespace has status "Ready":"True"
	I1209 10:35:34.510191  617708 pod_ready.go:82] duration metric: took 5.489042ms for pod "coredns-7c65d6cfc9-cd4lm" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.510203  617708 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qll9z" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.511896  617708 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-qll9z" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-qll9z" not found
	I1209 10:35:34.511914  617708 pod_ready.go:82] duration metric: took 1.705568ms for pod "coredns-7c65d6cfc9-qll9z" in "kube-system" namespace to be "Ready" ...
	E1209 10:35:34.511924  617708 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-qll9z" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-qll9z" not found
	I1209 10:35:34.511929  617708 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-156041" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.516171  617708 pod_ready.go:93] pod "etcd-addons-156041" in "kube-system" namespace has status "Ready":"True"
	I1209 10:35:34.516188  617708 pod_ready.go:82] duration metric: took 4.25298ms for pod "etcd-addons-156041" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.516196  617708 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-156041" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.520411  617708 pod_ready.go:93] pod "kube-apiserver-addons-156041" in "kube-system" namespace has status "Ready":"True"
	I1209 10:35:34.520436  617708 pod_ready.go:82] duration metric: took 4.232655ms for pod "kube-apiserver-addons-156041" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.520449  617708 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-156041" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.594962  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:34.694080  617708 pod_ready.go:93] pod "kube-controller-manager-addons-156041" in "kube-system" namespace has status "Ready":"True"
	I1209 10:35:34.694108  617708 pod_ready.go:82] duration metric: took 173.6504ms for pod "kube-controller-manager-addons-156041" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.694122  617708 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bthmb" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:34.770416  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:34.771768  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:34.857592  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:35.094806  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:35.095368  617708 pod_ready.go:93] pod "kube-proxy-bthmb" in "kube-system" namespace has status "Ready":"True"
	I1209 10:35:35.095392  617708 pod_ready.go:82] duration metric: took 401.261193ms for pod "kube-proxy-bthmb" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:35.095406  617708 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-156041" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:35.269859  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:35.276754  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:35.357702  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:35.494772  617708 pod_ready.go:93] pod "kube-scheduler-addons-156041" in "kube-system" namespace has status "Ready":"True"
	I1209 10:35:35.494795  617708 pod_ready.go:82] duration metric: took 399.378278ms for pod "kube-scheduler-addons-156041" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:35.494807  617708 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-kjjpq" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:35.595057  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:35.769834  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:35.771256  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:35.870434  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:35.894109  617708 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-kjjpq" in "kube-system" namespace has status "Ready":"True"
	I1209 10:35:35.894136  617708 pod_ready.go:82] duration metric: took 399.321311ms for pod "nvidia-device-plugin-daemonset-kjjpq" in "kube-system" namespace to be "Ready" ...
	I1209 10:35:35.894148  617708 pod_ready.go:39] duration metric: took 32.922661121s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 10:35:35.894190  617708 api_server.go:52] waiting for apiserver process to appear ...
	I1209 10:35:35.894256  617708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 10:35:35.934416  617708 api_server.go:72] duration metric: took 35.861440295s to wait for apiserver process to appear ...
	I1209 10:35:35.934471  617708 api_server.go:88] waiting for apiserver healthz status ...
	I1209 10:35:35.934501  617708 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I1209 10:35:35.939760  617708 api_server.go:279] https://192.168.39.161:8443/healthz returned 200:
	ok
	I1209 10:35:35.940811  617708 api_server.go:141] control plane version: v1.31.2
	I1209 10:35:35.940845  617708 api_server.go:131] duration metric: took 6.365033ms to wait for apiserver health ...
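The healthz probe logged just above (api_server.go) amounts to an HTTPS GET against the apiserver's /healthz endpoint. A minimal sketch follows, reusing the endpoint from the log; TLS verification is skipped purely to keep the sketch short, whereas a real check would trust the cluster CA.

// Sketch only: probe the apiserver health endpoint shown in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Assumption: skip certificate verification for brevity;
		// a production check would verify against the cluster CA.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.39.161:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // a healthy control plane returns "200: ok"
}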
	I1209 10:35:35.940857  617708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 10:35:36.098124  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:36.101263  617708 system_pods.go:59] 18 kube-system pods found
	I1209 10:35:36.101310  617708 system_pods.go:61] "amd-gpu-device-plugin-hbkzd" [68ff1229-b428-4958-bcad-1fa9f1bb55a4] Running
	I1209 10:35:36.101319  617708 system_pods.go:61] "coredns-7c65d6cfc9-cd4lm" [29f3ba07-4465-49c1-89c9-7963559eb074] Running
	I1209 10:35:36.101330  617708 system_pods.go:61] "csi-hostpath-attacher-0" [4ce59eb1-f6b0-42e3-b167-82743dead6d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 10:35:36.101345  617708 system_pods.go:61] "csi-hostpath-resizer-0" [12cbf9e5-ab92-4e05-a5bb-1aa38a653bd3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 10:35:36.101356  617708 system_pods.go:61] "csi-hostpathplugin-rk6qq" [c81f365c-4fbf-46b9-80d2-7388776c3da4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 10:35:36.101363  617708 system_pods.go:61] "etcd-addons-156041" [0db5fec8-be2b-43b9-95f0-bf7f1d4559d6] Running
	I1209 10:35:36.101373  617708 system_pods.go:61] "kube-apiserver-addons-156041" [b075ff6d-c2d2-4302-ad7e-ead23095ec56] Running
	I1209 10:35:36.101379  617708 system_pods.go:61] "kube-controller-manager-addons-156041" [cd4e8d19-1671-4024-8696-b12865565898] Running
	I1209 10:35:36.101384  617708 system_pods.go:61] "kube-ingress-dns-minikube" [dbc14232-0f6b-4848-9da8-d14681daebc5] Running
	I1209 10:35:36.101390  617708 system_pods.go:61] "kube-proxy-bthmb" [5a3b6ebf-90ff-4b75-b064-8de7e85140a0] Running
	I1209 10:35:36.101398  617708 system_pods.go:61] "kube-scheduler-addons-156041" [35d03d6e-d290-4cfd-b722-d5ac4682b7af] Running
	I1209 10:35:36.101410  617708 system_pods.go:61] "metrics-server-84c5f94fbc-s7gmn" [a2e3bba5-5ed2-4131-a072-a3597c3d28b1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 10:35:36.101420  617708 system_pods.go:61] "nvidia-device-plugin-daemonset-kjjpq" [9d6efa63-ad7e-417c-9a30-6ae237fb8824] Running
	I1209 10:35:36.101432  617708 system_pods.go:61] "registry-5cc95cd69-dz5k9" [94e4ed5a-c1d2-4327-99af-d2d3f88d0300] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 10:35:36.101441  617708 system_pods.go:61] "registry-proxy-8fjdn" [92870ba1-49e0-461f-91f0-1d0ee71c79d7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 10:35:36.101454  617708 system_pods.go:61] "snapshot-controller-56fcc65765-pf49d" [2d37dd49-32af-4d58-917a-73cafe8fdf4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 10:35:36.101466  617708 system_pods.go:61] "snapshot-controller-56fcc65765-zh99l" [ded01d68-2ce6-4cfe-99d0-672c5a04ce9a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 10:35:36.101479  617708 system_pods.go:61] "storage-provisioner" [105ef2e5-38ab-44ff-9b22-17aea32e722a] Running
	I1209 10:35:36.101492  617708 system_pods.go:74] duration metric: took 160.626319ms to wait for pod list to return data ...
	I1209 10:35:36.101507  617708 default_sa.go:34] waiting for default service account to be created ...
	I1209 10:35:36.270757  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:36.272139  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:36.296813  617708 default_sa.go:45] found service account: "default"
	I1209 10:35:36.296844  617708 default_sa.go:55] duration metric: took 195.325773ms for default service account to be created ...
	I1209 10:35:36.296857  617708 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 10:35:36.596266  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:36.598069  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:36.601663  617708 system_pods.go:86] 18 kube-system pods found
	I1209 10:35:36.601688  617708 system_pods.go:89] "amd-gpu-device-plugin-hbkzd" [68ff1229-b428-4958-bcad-1fa9f1bb55a4] Running
	I1209 10:35:36.601695  617708 system_pods.go:89] "coredns-7c65d6cfc9-cd4lm" [29f3ba07-4465-49c1-89c9-7963559eb074] Running
	I1209 10:35:36.601701  617708 system_pods.go:89] "csi-hostpath-attacher-0" [4ce59eb1-f6b0-42e3-b167-82743dead6d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 10:35:36.601708  617708 system_pods.go:89] "csi-hostpath-resizer-0" [12cbf9e5-ab92-4e05-a5bb-1aa38a653bd3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 10:35:36.601726  617708 system_pods.go:89] "csi-hostpathplugin-rk6qq" [c81f365c-4fbf-46b9-80d2-7388776c3da4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 10:35:36.601735  617708 system_pods.go:89] "etcd-addons-156041" [0db5fec8-be2b-43b9-95f0-bf7f1d4559d6] Running
	I1209 10:35:36.601740  617708 system_pods.go:89] "kube-apiserver-addons-156041" [b075ff6d-c2d2-4302-ad7e-ead23095ec56] Running
	I1209 10:35:36.601744  617708 system_pods.go:89] "kube-controller-manager-addons-156041" [cd4e8d19-1671-4024-8696-b12865565898] Running
	I1209 10:35:36.601750  617708 system_pods.go:89] "kube-ingress-dns-minikube" [dbc14232-0f6b-4848-9da8-d14681daebc5] Running
	I1209 10:35:36.601756  617708 system_pods.go:89] "kube-proxy-bthmb" [5a3b6ebf-90ff-4b75-b064-8de7e85140a0] Running
	I1209 10:35:36.601759  617708 system_pods.go:89] "kube-scheduler-addons-156041" [35d03d6e-d290-4cfd-b722-d5ac4682b7af] Running
	I1209 10:35:36.601765  617708 system_pods.go:89] "metrics-server-84c5f94fbc-s7gmn" [a2e3bba5-5ed2-4131-a072-a3597c3d28b1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 10:35:36.601768  617708 system_pods.go:89] "nvidia-device-plugin-daemonset-kjjpq" [9d6efa63-ad7e-417c-9a30-6ae237fb8824] Running
	I1209 10:35:36.601777  617708 system_pods.go:89] "registry-5cc95cd69-dz5k9" [94e4ed5a-c1d2-4327-99af-d2d3f88d0300] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 10:35:36.601782  617708 system_pods.go:89] "registry-proxy-8fjdn" [92870ba1-49e0-461f-91f0-1d0ee71c79d7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 10:35:36.601792  617708 system_pods.go:89] "snapshot-controller-56fcc65765-pf49d" [2d37dd49-32af-4d58-917a-73cafe8fdf4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 10:35:36.601801  617708 system_pods.go:89] "snapshot-controller-56fcc65765-zh99l" [ded01d68-2ce6-4cfe-99d0-672c5a04ce9a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 10:35:36.601806  617708 system_pods.go:89] "storage-provisioner" [105ef2e5-38ab-44ff-9b22-17aea32e722a] Running
	I1209 10:35:36.601815  617708 system_pods.go:126] duration metric: took 304.951645ms to wait for k8s-apps to be running ...
	I1209 10:35:36.601825  617708 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 10:35:36.601874  617708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:35:36.615991  617708 system_svc.go:56] duration metric: took 14.153504ms WaitForService to wait for kubelet
	I1209 10:35:36.616022  617708 kubeadm.go:582] duration metric: took 36.543056723s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 10:35:36.616043  617708 node_conditions.go:102] verifying NodePressure condition ...
	I1209 10:35:36.805849  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:36.806603  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:36.807421  617708 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:35:36.807468  617708 node_conditions.go:123] node cpu capacity is 2
	I1209 10:35:36.807493  617708 node_conditions.go:105] duration metric: took 191.443254ms to run NodePressure ...
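The NodePressure verification above reads the node's capacity and conditions. A rough client-go equivalent is sketched below; the kubeconfig path is an assumption, and this is not minikube's node_conditions code.

// Sketch only: list node capacity and conditions, as in the NodePressure check above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumption: kubeconfig path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should report False on a healthy node.
			fmt.Printf("  %s=%s\n", c.Type, c.Status)
		}
	}
}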
	I1209 10:35:36.807510  617708 start.go:241] waiting for startup goroutines ...
	I1209 10:35:36.857277  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:37.094345  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:37.269848  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:37.271652  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:37.358034  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:37.595377  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:37.771242  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:37.772249  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:37.857895  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:38.095475  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:38.269650  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:38.271900  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:38.358581  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:38.596503  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:38.769812  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:38.771042  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:38.858043  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:39.096373  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:39.269694  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:39.271083  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:39.357877  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:39.594668  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:39.770674  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:39.771366  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:39.857010  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:40.095122  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:40.269613  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:40.271536  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:40.357861  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:40.594988  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:40.770807  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:40.772712  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:40.857533  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:41.094822  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:41.270130  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:41.271214  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:41.357702  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:41.594657  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:41.769564  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:41.771221  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:41.857810  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:42.094770  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:42.269989  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:42.271221  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:42.358116  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:42.595407  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:42.769832  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:42.772729  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:42.857260  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:43.093986  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:43.277016  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:43.277720  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:43.357691  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:43.594995  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:43.770564  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:43.870074  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:43.870160  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:44.095309  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:44.269779  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:44.271159  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:44.358627  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:44.594505  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:44.769640  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:44.771265  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:44.857701  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:45.094949  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:45.277470  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:45.278365  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:45.671458  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:45.672747  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:45.771383  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:45.771751  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:45.871300  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:46.095750  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:46.270428  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:46.271752  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:46.371073  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:46.596739  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:46.770059  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:46.771724  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:46.857871  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:47.095448  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:47.270008  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:47.271167  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:47.357890  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:47.595700  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:47.769854  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:47.771859  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:47.857555  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:48.094576  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:48.269227  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:48.270738  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:48.357667  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:48.594710  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:48.770625  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:48.771185  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:48.858252  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:49.095442  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:49.269591  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:49.270962  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:49.358621  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:49.594436  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:49.769618  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:49.771312  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:49.858560  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:50.094349  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:50.269890  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:50.271436  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:50.357873  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:50.595297  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:50.770952  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:50.771148  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:50.871289  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:51.095090  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:51.270555  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:51.271512  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:51.358115  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:51.616922  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:51.784297  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:51.784323  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:51.858253  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:52.095136  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:52.270981  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:52.272840  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:52.357789  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:52.595245  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:52.771892  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:52.773642  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:52.857030  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:53.095437  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:53.270004  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:53.271088  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:53.358496  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:53.595838  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:53.770377  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:53.771655  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:53.871179  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:54.097098  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:54.269310  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:54.270730  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:54.357514  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:54.594655  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:54.770034  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:54.772391  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:54.857614  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:55.096045  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:55.269618  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:55.270865  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:55.357947  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:55.594647  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:55.769510  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:55.771085  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:55.858109  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:56.095515  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:56.270365  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:56.271781  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:56.357881  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:56.594683  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:56.769688  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:56.771258  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:56.857766  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:57.094897  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:57.270165  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:57.271299  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:57.357882  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:57.594681  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:57.770510  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:57.771927  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:57.857542  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:58.094386  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:58.269317  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:58.270917  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:58.357892  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:58.595177  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:58.953903  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:58.954813  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:58.956027  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:59.095038  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:59.271571  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:59.272368  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:59.358055  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:35:59.595402  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:35:59.769170  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:35:59.771034  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:35:59.857631  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:00.094879  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:00.270262  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:00.271911  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:36:00.357010  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:00.594304  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:00.770539  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:00.771770  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:36:00.859890  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:01.363674  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:36:01.364021  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:01.364060  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:01.364682  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:01.594839  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:01.769651  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:01.771104  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:36:01.857552  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:02.094938  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:02.269708  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:02.270996  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:36:02.357626  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:02.593962  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:02.771227  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:02.771407  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:36:02.857257  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:03.095009  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:03.272217  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 10:36:03.273083  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:03.370396  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:03.594411  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:03.771098  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:03.771968  617708 kapi.go:107] duration metric: took 55.004572108s to wait for kubernetes.io/minikube-addons=registry ...
	I1209 10:36:03.870310  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:04.095382  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:04.269161  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:04.357532  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:04.593965  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:04.770694  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:04.869788  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:05.096340  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:05.270703  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:05.357703  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:05.595978  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:05.770462  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:05.857810  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:06.095011  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:06.269614  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:06.357991  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:06.988536  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:07.088031  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:07.088722  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:07.095122  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:07.270996  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:07.370124  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:07.595667  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:07.769986  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:07.857573  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:08.094231  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:08.270107  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:08.357949  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:08.595449  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:08.770470  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:08.861585  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:09.094445  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:09.269880  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:09.357883  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:09.595928  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:09.770006  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:09.857511  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:10.095738  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:10.270332  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:10.356899  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:10.596945  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:10.770271  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:10.857773  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:11.095067  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:11.269221  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:11.357726  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:11.606465  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:11.770415  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:11.863279  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:12.097012  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:12.278439  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:12.375904  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:12.595054  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:12.769695  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:12.859344  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:13.094817  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:13.269750  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:13.357418  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:13.596767  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:13.769623  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:14.099851  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:14.108699  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:14.272958  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:14.371930  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:14.595821  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:14.770159  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:14.857333  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:15.094796  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:15.270725  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:15.358605  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:15.595724  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:15.770083  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:15.861080  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:16.095160  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:16.269694  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:16.357024  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:16.595386  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:16.769424  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:16.857768  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:17.094875  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:17.270072  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:17.357237  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:17.595804  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:17.770107  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:17.857435  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:18.094595  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:18.284102  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:18.357730  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:18.594848  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:18.770324  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:18.857880  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:19.094583  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:19.270374  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:19.357432  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:19.594813  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:19.770323  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:19.858381  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:20.094510  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:20.269688  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:20.359919  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:20.594341  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:21.118300  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:21.118589  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:21.118989  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:21.270060  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:21.370252  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:21.596665  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:21.770041  617708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 10:36:21.857367  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:22.094396  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:22.270403  617708 kapi.go:107] duration metric: took 1m13.505086791s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1209 10:36:22.358346  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:22.595225  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:22.858403  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:23.102417  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:23.357660  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:23.595104  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:23.858076  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:24.095191  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:24.358084  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:24.595546  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:24.858233  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:25.096132  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:25.358395  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:25.595437  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:25.858080  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:26.095276  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:26.358030  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:26.595076  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:26.857806  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:27.094872  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:27.357408  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:27.594564  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:27.858411  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 10:36:28.095550  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:28.358388  617708 kapi.go:107] duration metric: took 1m17.00446715s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1209 10:36:28.359906  617708 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-156041 cluster.
	I1209 10:36:28.361368  617708 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1209 10:36:28.362582  617708 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1209 10:36:28.594259  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:29.094662  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:29.594940  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:30.095194  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:30.595950  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:31.095024  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:31.802111  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:32.095611  617708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 10:36:32.594841  617708 kapi.go:107] duration metric: took 1m23.004720869s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1209 10:36:32.596774  617708 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, storage-provisioner-rancher, amd-gpu-device-plugin, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1209 10:36:32.597893  617708 addons.go:510] duration metric: took 1m32.524857757s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner storage-provisioner-rancher amd-gpu-device-plugin metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1209 10:36:32.597942  617708 start.go:246] waiting for cluster config update ...
	I1209 10:36:32.597967  617708 start.go:255] writing updated cluster config ...
	I1209 10:36:32.598292  617708 ssh_runner.go:195] Run: rm -f paused
	I1209 10:36:32.654191  617708 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 10:36:32.655785  617708 out.go:177] * Done! kubectl is now configured to use "addons-156041" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.453913494Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a37e716d-a2f5-4f89-827d-45d02841f8f9 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.455385321Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e6f2e76-db55-4c56-8c1d-b694877ef934 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.462502349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740942462457657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e6f2e76-db55-4c56-8c1d-b694877ef934 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.465823447Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=434cee91-6261-438b-abad-1498e004f5ff name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.465987109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=434cee91-6261-438b-abad-1498e004f5ff name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.466391892Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0156508be550053639469886fe0b24b0acfeb2f0f487f8d2e4317544e5c26b39,PodSandboxId:e296c75389c1fce69f6691b166f60eb93c4c3dc898e64c52b251bbb31de2dd33,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733740811624619523,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-nfmnx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6cde186-7633-4323-9b33-f3737f01184c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b168e9562cf64220e8a9c4b8beac12735826788d716e089cb7e34fdae303b2f,PodSandboxId:ac3a87c920ab6b60db82caa81206de42aa3c987c0937f468cb0066372d894be9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733740670424687973,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 63f24e56-7ff2-470f-aef8-eaf2dada0965,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff776faa63ae9dcd8b90e9dd5374810408479ee8ea214a4add287e5ebb8365cb,PodSandboxId:a3b4719506e0b21e0c353d9bf21cd48f3f51a0849734c71ca7ca024789a384dc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733740596918948092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 750ac467-92cd-4f0f-8
288-ccecae9af727,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394115f8b81ee2fb3ecfaf0e3323653056440234aef7037a4f7ff12fbb0ce841,PodSandboxId:db0494eee99c9c8630c77e161af89475c89af4b5c3633f0b6b527c0ae756303b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733740545768877462,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s7gmn,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: a2e3bba5-5ed2-4131-a072-a3597c3d28b1,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab70ffac425c2e03e7b07a17673804d9f18c462bdcf94ec70b00b8447221c59,PodSandboxId:1039eee9ff97dab2e58d5634aa48bbb54bd7d8a6daf640cead89335f0d80d391,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733740533539218551,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hbkzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ff1229-b428-4958-bcad-1fa9f1bb55a4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75072ac379d65e9afbaf83daf64c569651d9cdc52aca2478b15b50422e9bb9a,PodSandboxId:349471269d2f80ee73b495f546885b4a41e8886cdc755a52b91ba41f16669f77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733740506615913756,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105ef2e5-38ab-44ff-9b22-17aea32e722a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc8f7eb6f9e3d4fef8a83f5803bff22ae2c79298d343bfc37123f5f724cb7bc,PodSandboxId:5e55d595d4610753a6e4830b76c80c3384530eb911db14afb2b430ba0c18eb75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733740503019052026,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cd4lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29f3ba07-4465-49c1-89c9-7963559eb074,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caaaa0e3d8bef385b8e9d924a28834f3d28156742f878b33c9f3ca3839a8061d,PodSandboxId:f4d203d196bafbf9883905950139a3cefe11830b57e72e0d21291758e5a2a9ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733740501365124514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bthmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3b6ebf-90ff-4b75-b064-8de7e85140a0,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f3e3cc7ba9b044d6281de503c488e78f6fe147933ca3eddafd455ff57969f0,PodSandboxId:a92c79d980fe69e7985961b19f79015693e14e1d03775e561680218da8a22ac7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733740489663843532,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5804f4020c516b70575448cdaf565d0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5be4a6b2466ed9d67eab208c35cd2ca892798f62c660be84f94db3f8ab52d683,PodSandboxId:10ff2598d0df13f7ddde83a59ee7ab879c10020281e7999028f08ab8d5451316,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e
415a49173,State:CONTAINER_RUNNING,CreatedAt:1733740489640050025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656fd1e0a35f1dffa82d5963f298e8ee,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69162feab66f8d9454b1e7b9084d063da9017e38a8589fb25dcebe2fda8589e9,PodSandboxId:1587ded85bbe1aa9fa4b317e1399a61ead1e72f6ea584ec56ad355cd2c55d810,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:C
ONTAINER_RUNNING,CreatedAt:1733740489631611141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79094dcc5999520aa9623cd82617e9f,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d58a34c77c2cf417f71d9147333ecb767357bdae1c925fa33ad4de8512e260b,PodSandboxId:97a28aa36f69636a4ed07c89c43e16f80a0c7f7c46b91799e4ef830ad16b1b57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER
_RUNNING,CreatedAt:1733740489645315020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d59fd670d2a9b851fabc09bfa591e92,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=434cee91-6261-438b-abad-1498e004f5ff name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.504737849Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7679eabf-86de-45ec-9aaf-db73275fe8f7 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.504813592Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7679eabf-86de-45ec-9aaf-db73275fe8f7 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.506302876Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee80913a-e6f9-4a66-b8e5-407341142316 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.507453500Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740942507427016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee80913a-e6f9-4a66-b8e5-407341142316 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.508010437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e85d9e9f-5de7-43d2-bea6-51d17a98890e name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.508064665Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e85d9e9f-5de7-43d2-bea6-51d17a98890e name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.508449500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0156508be550053639469886fe0b24b0acfeb2f0f487f8d2e4317544e5c26b39,PodSandboxId:e296c75389c1fce69f6691b166f60eb93c4c3dc898e64c52b251bbb31de2dd33,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733740811624619523,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-nfmnx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6cde186-7633-4323-9b33-f3737f01184c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b168e9562cf64220e8a9c4b8beac12735826788d716e089cb7e34fdae303b2f,PodSandboxId:ac3a87c920ab6b60db82caa81206de42aa3c987c0937f468cb0066372d894be9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733740670424687973,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 63f24e56-7ff2-470f-aef8-eaf2dada0965,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff776faa63ae9dcd8b90e9dd5374810408479ee8ea214a4add287e5ebb8365cb,PodSandboxId:a3b4719506e0b21e0c353d9bf21cd48f3f51a0849734c71ca7ca024789a384dc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733740596918948092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 750ac467-92cd-4f0f-8
288-ccecae9af727,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394115f8b81ee2fb3ecfaf0e3323653056440234aef7037a4f7ff12fbb0ce841,PodSandboxId:db0494eee99c9c8630c77e161af89475c89af4b5c3633f0b6b527c0ae756303b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733740545768877462,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s7gmn,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: a2e3bba5-5ed2-4131-a072-a3597c3d28b1,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab70ffac425c2e03e7b07a17673804d9f18c462bdcf94ec70b00b8447221c59,PodSandboxId:1039eee9ff97dab2e58d5634aa48bbb54bd7d8a6daf640cead89335f0d80d391,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733740533539218551,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hbkzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ff1229-b428-4958-bcad-1fa9f1bb55a4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75072ac379d65e9afbaf83daf64c569651d9cdc52aca2478b15b50422e9bb9a,PodSandboxId:349471269d2f80ee73b495f546885b4a41e8886cdc755a52b91ba41f16669f77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733740506615913756,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105ef2e5-38ab-44ff-9b22-17aea32e722a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc8f7eb6f9e3d4fef8a83f5803bff22ae2c79298d343bfc37123f5f724cb7bc,PodSandboxId:5e55d595d4610753a6e4830b76c80c3384530eb911db14afb2b430ba0c18eb75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733740503019052026,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cd4lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29f3ba07-4465-49c1-89c9-7963559eb074,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caaaa0e3d8bef385b8e9d924a28834f3d28156742f878b33c9f3ca3839a8061d,PodSandboxId:f4d203d196bafbf9883905950139a3cefe11830b57e72e0d21291758e5a2a9ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733740501365124514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bthmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3b6ebf-90ff-4b75-b064-8de7e85140a0,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f3e3cc7ba9b044d6281de503c488e78f6fe147933ca3eddafd455ff57969f0,PodSandboxId:a92c79d980fe69e7985961b19f79015693e14e1d03775e561680218da8a22ac7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733740489663843532,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5804f4020c516b70575448cdaf565d0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5be4a6b2466ed9d67eab208c35cd2ca892798f62c660be84f94db3f8ab52d683,PodSandboxId:10ff2598d0df13f7ddde83a59ee7ab879c10020281e7999028f08ab8d5451316,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e
415a49173,State:CONTAINER_RUNNING,CreatedAt:1733740489640050025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656fd1e0a35f1dffa82d5963f298e8ee,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69162feab66f8d9454b1e7b9084d063da9017e38a8589fb25dcebe2fda8589e9,PodSandboxId:1587ded85bbe1aa9fa4b317e1399a61ead1e72f6ea584ec56ad355cd2c55d810,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:C
ONTAINER_RUNNING,CreatedAt:1733740489631611141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79094dcc5999520aa9623cd82617e9f,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d58a34c77c2cf417f71d9147333ecb767357bdae1c925fa33ad4de8512e260b,PodSandboxId:97a28aa36f69636a4ed07c89c43e16f80a0c7f7c46b91799e4ef830ad16b1b57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER
_RUNNING,CreatedAt:1733740489645315020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d59fd670d2a9b851fabc09bfa591e92,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e85d9e9f-5de7-43d2-bea6-51d17a98890e name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.541372539Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=137f43f3-5f6d-40ba-9f0f-744221b2de75 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.541461980Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=137f43f3-5f6d-40ba-9f0f-744221b2de75 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.542880072Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7472276c-ab12-4c7b-ab99-57016bd77072 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.544276016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740942544249152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7472276c-ab12-4c7b-ab99-57016bd77072 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.544902630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ec6012a-f57e-424e-a2df-2b2edc3075a0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.544955398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ec6012a-f57e-424e-a2df-2b2edc3075a0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.545365445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0156508be550053639469886fe0b24b0acfeb2f0f487f8d2e4317544e5c26b39,PodSandboxId:e296c75389c1fce69f6691b166f60eb93c4c3dc898e64c52b251bbb31de2dd33,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733740811624619523,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-nfmnx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6cde186-7633-4323-9b33-f3737f01184c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b168e9562cf64220e8a9c4b8beac12735826788d716e089cb7e34fdae303b2f,PodSandboxId:ac3a87c920ab6b60db82caa81206de42aa3c987c0937f468cb0066372d894be9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733740670424687973,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 63f24e56-7ff2-470f-aef8-eaf2dada0965,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff776faa63ae9dcd8b90e9dd5374810408479ee8ea214a4add287e5ebb8365cb,PodSandboxId:a3b4719506e0b21e0c353d9bf21cd48f3f51a0849734c71ca7ca024789a384dc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733740596918948092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 750ac467-92cd-4f0f-8
288-ccecae9af727,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394115f8b81ee2fb3ecfaf0e3323653056440234aef7037a4f7ff12fbb0ce841,PodSandboxId:db0494eee99c9c8630c77e161af89475c89af4b5c3633f0b6b527c0ae756303b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733740545768877462,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s7gmn,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: a2e3bba5-5ed2-4131-a072-a3597c3d28b1,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab70ffac425c2e03e7b07a17673804d9f18c462bdcf94ec70b00b8447221c59,PodSandboxId:1039eee9ff97dab2e58d5634aa48bbb54bd7d8a6daf640cead89335f0d80d391,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733740533539218551,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hbkzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ff1229-b428-4958-bcad-1fa9f1bb55a4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75072ac379d65e9afbaf83daf64c569651d9cdc52aca2478b15b50422e9bb9a,PodSandboxId:349471269d2f80ee73b495f546885b4a41e8886cdc755a52b91ba41f16669f77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733740506615913756,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105ef2e5-38ab-44ff-9b22-17aea32e722a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc8f7eb6f9e3d4fef8a83f5803bff22ae2c79298d343bfc37123f5f724cb7bc,PodSandboxId:5e55d595d4610753a6e4830b76c80c3384530eb911db14afb2b430ba0c18eb75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733740503019052026,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cd4lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29f3ba07-4465-49c1-89c9-7963559eb074,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caaaa0e3d8bef385b8e9d924a28834f3d28156742f878b33c9f3ca3839a8061d,PodSandboxId:f4d203d196bafbf9883905950139a3cefe11830b57e72e0d21291758e5a2a9ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733740501365124514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bthmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3b6ebf-90ff-4b75-b064-8de7e85140a0,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f3e3cc7ba9b044d6281de503c488e78f6fe147933ca3eddafd455ff57969f0,PodSandboxId:a92c79d980fe69e7985961b19f79015693e14e1d03775e561680218da8a22ac7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733740489663843532,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5804f4020c516b70575448cdaf565d0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5be4a6b2466ed9d67eab208c35cd2ca892798f62c660be84f94db3f8ab52d683,PodSandboxId:10ff2598d0df13f7ddde83a59ee7ab879c10020281e7999028f08ab8d5451316,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e
415a49173,State:CONTAINER_RUNNING,CreatedAt:1733740489640050025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656fd1e0a35f1dffa82d5963f298e8ee,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69162feab66f8d9454b1e7b9084d063da9017e38a8589fb25dcebe2fda8589e9,PodSandboxId:1587ded85bbe1aa9fa4b317e1399a61ead1e72f6ea584ec56ad355cd2c55d810,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:C
ONTAINER_RUNNING,CreatedAt:1733740489631611141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79094dcc5999520aa9623cd82617e9f,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d58a34c77c2cf417f71d9147333ecb767357bdae1c925fa33ad4de8512e260b,PodSandboxId:97a28aa36f69636a4ed07c89c43e16f80a0c7f7c46b91799e4ef830ad16b1b57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER
_RUNNING,CreatedAt:1733740489645315020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d59fd670d2a9b851fabc09bfa591e92,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ec6012a-f57e-424e-a2df-2b2edc3075a0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.568471310Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=31438351-9adf-4afa-8eb8-5fe3b4e74b3b name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.568782703Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e296c75389c1fce69f6691b166f60eb93c4c3dc898e64c52b251bbb31de2dd33,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-nfmnx,Uid:a6cde186-7633-4323-9b33-f3737f01184c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733740809010955279,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-nfmnx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6cde186-7633-4323-9b33-f3737f01184c,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T10:40:08.693267074Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ac3a87c920ab6b60db82caa81206de42aa3c987c0937f468cb0066372d894be9,Metadata:&PodSandboxMetadata{Name:nginx,Uid:63f24e56-7ff2-470f-aef8-eaf2dada0965,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1733740666379894206,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 63f24e56-7ff2-470f-aef8-eaf2dada0965,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T10:37:46.072239576Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a3b4719506e0b21e0c353d9bf21cd48f3f51a0849734c71ca7ca024789a384dc,Metadata:&PodSandboxMetadata{Name:busybox,Uid:750ac467-92cd-4f0f-8288-ccecae9af727,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733740593558736122,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 750ac467-92cd-4f0f-8288-ccecae9af727,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T10:36:33.238697354Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:db0494eee99c9c8630
c77e161af89475c89af4b5c3633f0b6b527c0ae756303b,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-s7gmn,Uid:a2e3bba5-5ed2-4131-a072-a3597c3d28b1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733740505743893610,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s7gmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e3bba5-5ed2-4131-a072-a3597c3d28b1,k8s-app: metrics-server,pod-template-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T10:35:05.418820180Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:349471269d2f80ee73b495f546885b4a41e8886cdc755a52b91ba41f16669f77,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:105ef2e5-38ab-44ff-9b22-17aea32e722a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733740505191808602,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kuber
netes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105ef2e5-38ab-44ff-9b22-17aea32e722a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-09T10:35:04.797201445Z,kubernetes.io/config.source: api,},RuntimeHandler:,
},&PodSandbox{Id:1039eee9ff97dab2e58d5634aa48bbb54bd7d8a6daf640cead89335f0d80d391,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-hbkzd,Uid:68ff1229-b428-4958-bcad-1fa9f1bb55a4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733740502807560500,Labels:map[string]string{controller-revision-hash: 59cf7d9b45,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-hbkzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ff1229-b428-4958-bcad-1fa9f1bb55a4,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T10:35:02.497018160Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f4d203d196bafbf9883905950139a3cefe11830b57e72e0d21291758e5a2a9ce,Metadata:&PodSandboxMetadata{Name:kube-proxy-bthmb,Uid:5a3b6ebf-90ff-4b75-b064-8de7e85140a0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733740500593670046,Labels:map[string]str
ing{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bthmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3b6ebf-90ff-4b75-b064-8de7e85140a0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T10:34:59.686589007Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5e55d595d4610753a6e4830b76c80c3384530eb911db14afb2b430ba0c18eb75,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-cd4lm,Uid:29f3ba07-4465-49c1-89c9-7963559eb074,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733740500330595688,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-cd4lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29f3ba07-4465-49c1-89c9-7963559eb074,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T10:35:00.024893776Z,kubern
etes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:97a28aa36f69636a4ed07c89c43e16f80a0c7f7c46b91799e4ef830ad16b1b57,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-156041,Uid:6d59fd670d2a9b851fabc09bfa591e92,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733740489492403683,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d59fd670d2a9b851fabc09bfa591e92,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6d59fd670d2a9b851fabc09bfa591e92,kubernetes.io/config.seen: 2024-12-09T10:34:48.811982512Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:10ff2598d0df13f7ddde83a59ee7ab879c10020281e7999028f08ab8d5451316,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-156041,Uid:656fd1e0a35f1dffa82d5963f298e8ee,Namespace:kube-system,Attempt:0,},State:SANDBOX
_READY,CreatedAt:1733740489490842745,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656fd1e0a35f1dffa82d5963f298e8ee,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.161:8443,kubernetes.io/config.hash: 656fd1e0a35f1dffa82d5963f298e8ee,kubernetes.io/config.seen: 2024-12-09T10:34:48.811981623Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a92c79d980fe69e7985961b19f79015693e14e1d03775e561680218da8a22ac7,Metadata:&PodSandboxMetadata{Name:etcd-addons-156041,Uid:f5804f4020c516b70575448cdaf565d0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733740489472067556,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5804f4
020c516b70575448cdaf565d0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.161:2379,kubernetes.io/config.hash: f5804f4020c516b70575448cdaf565d0,kubernetes.io/config.seen: 2024-12-09T10:34:48.811978252Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1587ded85bbe1aa9fa4b317e1399a61ead1e72f6ea584ec56ad355cd2c55d810,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-156041,Uid:f79094dcc5999520aa9623cd82617e9f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733740489470973784,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79094dcc5999520aa9623cd82617e9f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f79094dcc5999520aa9623cd82617e9f,kubernetes.io/config.seen: 2024-12-09T10:34:48.811983266Z,kubernetes.io/config.source
: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=31438351-9adf-4afa-8eb8-5fe3b4e74b3b name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.569487919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52b2630e-b013-4abc-85d1-afe149e38d1e name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.569545942Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52b2630e-b013-4abc-85d1-afe149e38d1e name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:42:22 addons-156041 crio[662]: time="2024-12-09 10:42:22.569831055Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0156508be550053639469886fe0b24b0acfeb2f0f487f8d2e4317544e5c26b39,PodSandboxId:e296c75389c1fce69f6691b166f60eb93c4c3dc898e64c52b251bbb31de2dd33,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733740811624619523,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-nfmnx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6cde186-7633-4323-9b33-f3737f01184c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b168e9562cf64220e8a9c4b8beac12735826788d716e089cb7e34fdae303b2f,PodSandboxId:ac3a87c920ab6b60db82caa81206de42aa3c987c0937f468cb0066372d894be9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733740670424687973,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 63f24e56-7ff2-470f-aef8-eaf2dada0965,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff776faa63ae9dcd8b90e9dd5374810408479ee8ea214a4add287e5ebb8365cb,PodSandboxId:a3b4719506e0b21e0c353d9bf21cd48f3f51a0849734c71ca7ca024789a384dc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733740596918948092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 750ac467-92cd-4f0f-8
288-ccecae9af727,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394115f8b81ee2fb3ecfaf0e3323653056440234aef7037a4f7ff12fbb0ce841,PodSandboxId:db0494eee99c9c8630c77e161af89475c89af4b5c3633f0b6b527c0ae756303b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733740545768877462,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s7gmn,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: a2e3bba5-5ed2-4131-a072-a3597c3d28b1,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab70ffac425c2e03e7b07a17673804d9f18c462bdcf94ec70b00b8447221c59,PodSandboxId:1039eee9ff97dab2e58d5634aa48bbb54bd7d8a6daf640cead89335f0d80d391,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733740533539218551,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hbkzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ff1229-b428-4958-bcad-1fa9f1bb55a4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75072ac379d65e9afbaf83daf64c569651d9cdc52aca2478b15b50422e9bb9a,PodSandboxId:349471269d2f80ee73b495f546885b4a41e8886cdc755a52b91ba41f16669f77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733740506615913756,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 105ef2e5-38ab-44ff-9b22-17aea32e722a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc8f7eb6f9e3d4fef8a83f5803bff22ae2c79298d343bfc37123f5f724cb7bc,PodSandboxId:5e55d595d4610753a6e4830b76c80c3384530eb911db14afb2b430ba0c18eb75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733740503019052026,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cd4lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29f3ba07-4465-49c1-89c9-7963559eb074,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caaaa0e3d8bef385b8e9d924a28834f3d28156742f878b33c9f3ca3839a8061d,PodSandboxId:f4d203d196bafbf9883905950139a3cefe11830b57e72e0d21291758e5a2a9ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733740501365124514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bthmb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a3b6ebf-90ff-4b75-b064-8de7e85140a0,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f3e3cc7ba9b044d6281de503c488e78f6fe147933ca3eddafd455ff57969f0,PodSandboxId:a92c79d980fe69e7985961b19f79015693e14e1d03775e561680218da8a22ac7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733740489663843532,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5804f4020c516b70575448cdaf565d0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5be4a6b2466ed9d67eab208c35cd2ca892798f62c660be84f94db3f8ab52d683,PodSandboxId:10ff2598d0df13f7ddde83a59ee7ab879c10020281e7999028f08ab8d5451316,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e
415a49173,State:CONTAINER_RUNNING,CreatedAt:1733740489640050025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656fd1e0a35f1dffa82d5963f298e8ee,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69162feab66f8d9454b1e7b9084d063da9017e38a8589fb25dcebe2fda8589e9,PodSandboxId:1587ded85bbe1aa9fa4b317e1399a61ead1e72f6ea584ec56ad355cd2c55d810,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:C
ONTAINER_RUNNING,CreatedAt:1733740489631611141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79094dcc5999520aa9623cd82617e9f,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d58a34c77c2cf417f71d9147333ecb767357bdae1c925fa33ad4de8512e260b,PodSandboxId:97a28aa36f69636a4ed07c89c43e16f80a0c7f7c46b91799e4ef830ad16b1b57,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER
_RUNNING,CreatedAt:1733740489645315020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-156041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d59fd670d2a9b851fabc09bfa591e92,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=52b2630e-b013-4abc-85d1-afe149e38d1e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0156508be5500       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   e296c75389c1f       hello-world-app-55bf9c44b4-nfmnx
	6b168e9562cf6       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         4 minutes ago       Running             nginx                     0                   ac3a87c920ab6       nginx
	ff776faa63ae9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   a3b4719506e0b       busybox
	394115f8b81ee       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   6 minutes ago       Running             metrics-server            0                   db0494eee99c9       metrics-server-84c5f94fbc-s7gmn
	eab70ffac425c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                6 minutes ago       Running             amd-gpu-device-plugin     0                   1039eee9ff97d       amd-gpu-device-plugin-hbkzd
	e75072ac379d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   349471269d2f8       storage-provisioner
	6bc8f7eb6f9e3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        7 minutes ago       Running             coredns                   0                   5e55d595d4610       coredns-7c65d6cfc9-cd4lm
	caaaa0e3d8bef       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        7 minutes ago       Running             kube-proxy                0                   f4d203d196baf       kube-proxy-bthmb
	32f3e3cc7ba9b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   a92c79d980fe6       etcd-addons-156041
	7d58a34c77c2c       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        7 minutes ago       Running             kube-controller-manager   0                   97a28aa36f696       kube-controller-manager-addons-156041
	5be4a6b2466ed       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        7 minutes ago       Running             kube-apiserver            0                   10ff2598d0df1       kube-apiserver-addons-156041
	69162feab66f8       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        7 minutes ago       Running             kube-scheduler            0                   1587ded85bbe1       kube-scheduler-addons-156041
	
	
	==> coredns [6bc8f7eb6f9e3d4fef8a83f5803bff22ae2c79298d343bfc37123f5f724cb7bc] <==
	[INFO] 10.244.0.22:37674 - 18654 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000123004s
	[INFO] 10.244.0.22:37674 - 1395 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000124573s
	[INFO] 10.244.0.22:58375 - 49567 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000069243s
	[INFO] 10.244.0.22:37674 - 222 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000126822s
	[INFO] 10.244.0.22:37674 - 8566 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000133008s
	[INFO] 10.244.0.22:58375 - 26760 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000096094s
	[INFO] 10.244.0.22:58375 - 15627 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000094988s
	[INFO] 10.244.0.22:58375 - 182 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000052672s
	[INFO] 10.244.0.22:58375 - 26097 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059907s
	[INFO] 10.244.0.22:58375 - 15331 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000090065s
	[INFO] 10.244.0.22:58375 - 1316 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055976s
	[INFO] 10.244.0.22:35546 - 52065 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000092901s
	[INFO] 10.244.0.22:51117 - 10321 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000051734s
	[INFO] 10.244.0.22:51117 - 19122 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000176292s
	[INFO] 10.244.0.22:51117 - 17157 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040768s
	[INFO] 10.244.0.22:51117 - 28408 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038266s
	[INFO] 10.244.0.22:51117 - 28749 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000079078s
	[INFO] 10.244.0.22:51117 - 23704 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000081884s
	[INFO] 10.244.0.22:35546 - 19261 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00010089s
	[INFO] 10.244.0.22:51117 - 22612 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00012757s
	[INFO] 10.244.0.22:35546 - 47778 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000133264s
	[INFO] 10.244.0.22:35546 - 26199 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000177568s
	[INFO] 10.244.0.22:35546 - 61333 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057598s
	[INFO] 10.244.0.22:35546 - 58613 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057824s
	[INFO] 10.244.0.22:35546 - 15709 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000072761s
	
	
	==> describe nodes <==
	Name:               addons-156041
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-156041
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=addons-156041
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T10_34_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-156041
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:34:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-156041
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:42:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 10:40:31 +0000   Mon, 09 Dec 2024 10:34:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 10:40:31 +0000   Mon, 09 Dec 2024 10:34:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 10:40:31 +0000   Mon, 09 Dec 2024 10:34:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 10:40:31 +0000   Mon, 09 Dec 2024 10:34:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    addons-156041
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 af3881500388411695ff2439e8e5bf3a
	  System UUID:                af388150-0388-4116-95ff-2439e8e5bf3a
	  Boot ID:                    3c44fb91-9e20-4f02-a13a-4dfef199939f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  default                     hello-world-app-55bf9c44b4-nfmnx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 amd-gpu-device-plugin-hbkzd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 coredns-7c65d6cfc9-cd4lm                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m23s
	  kube-system                 etcd-addons-156041                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m28s
	  kube-system                 kube-apiserver-addons-156041             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-controller-manager-addons-156041    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-proxy-bthmb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-scheduler-addons-156041             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 metrics-server-84c5f94fbc-s7gmn          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m17s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m20s  kube-proxy       
	  Normal  Starting                 7m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m28s  kubelet          Node addons-156041 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m28s  kubelet          Node addons-156041 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m28s  kubelet          Node addons-156041 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m27s  kubelet          Node addons-156041 status is now: NodeReady
	  Normal  RegisteredNode           7m24s  node-controller  Node addons-156041 event: Registered Node addons-156041 in Controller
	
	
	==> dmesg <==
	[  +0.149509] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.001627] kauditd_printk_skb: 113 callbacks suppressed
	[  +5.001348] kauditd_printk_skb: 162 callbacks suppressed
	[  +6.098600] kauditd_printk_skb: 44 callbacks suppressed
	[ +14.476865] kauditd_printk_skb: 5 callbacks suppressed
	[ +19.863642] kauditd_printk_skb: 4 callbacks suppressed
	[  +8.298900] kauditd_printk_skb: 27 callbacks suppressed
	[Dec 9 10:36] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.624561] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.089000] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.738809] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.865174] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.735390] kauditd_printk_skb: 12 callbacks suppressed
	[ +14.776639] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.124724] kauditd_printk_skb: 2 callbacks suppressed
	[Dec 9 10:37] kauditd_printk_skb: 8 callbacks suppressed
	[  +7.463808] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.494145] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.014184] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.217761] kauditd_printk_skb: 32 callbacks suppressed
	[ +12.317805] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.114463] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.039426] kauditd_printk_skb: 15 callbacks suppressed
	[Dec 9 10:40] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.771258] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [32f3e3cc7ba9b044d6281de503c488e78f6fe147933ca3eddafd455ff57969f0] <==
	{"level":"warn","ts":"2024-12-09T10:36:21.097850Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T10:36:20.704574Z","time spent":"393.272804ms","remote":"127.0.0.1:33808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":82,"response count":0,"response size":27,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" count_only:true "}
	{"level":"warn","ts":"2024-12-09T10:36:21.097941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"342.625143ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T10:36:21.097955Z","caller":"traceutil/trace.go:171","msg":"trace[351815813] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1089; }","duration":"342.638371ms","start":"2024-12-09T10:36:20.755312Z","end":"2024-12-09T10:36:21.097951Z","steps":["trace[351815813] 'agreement among raft nodes before linearized reading'  (duration: 342.614811ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T10:36:21.097966Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T10:36:20.755280Z","time spent":"342.68316ms","remote":"127.0.0.1:33520","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-12-09T10:36:21.098208Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.779731ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T10:36:21.098258Z","caller":"traceutil/trace.go:171","msg":"trace[791460012] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1089; }","duration":"253.832343ms","start":"2024-12-09T10:36:20.844419Z","end":"2024-12-09T10:36:21.098252Z","steps":["trace[791460012] 'agreement among raft nodes before linearized reading'  (duration: 253.772593ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T10:36:21.098322Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"321.247002ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T10:36:21.098334Z","caller":"traceutil/trace.go:171","msg":"trace[545256593] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1089; }","duration":"321.259478ms","start":"2024-12-09T10:36:20.777070Z","end":"2024-12-09T10:36:21.098330Z","steps":["trace[545256593] 'agreement among raft nodes before linearized reading'  (duration: 321.241095ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T10:36:31.782191Z","caller":"traceutil/trace.go:171","msg":"trace[887302388] linearizableReadLoop","detail":"{readStateIndex:1187; appliedIndex:1186; }","duration":"199.179993ms","start":"2024-12-09T10:36:31.582997Z","end":"2024-12-09T10:36:31.782177Z","steps":["trace[887302388] 'read index received'  (duration: 198.890735ms)","trace[887302388] 'applied index is now lower than readState.Index'  (duration: 288.665µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T10:36:31.782382Z","caller":"traceutil/trace.go:171","msg":"trace[1371218546] transaction","detail":"{read_only:false; response_revision:1150; number_of_response:1; }","duration":"285.965033ms","start":"2024-12-09T10:36:31.496410Z","end":"2024-12-09T10:36:31.782375Z","steps":["trace[1371218546] 'process raft request'  (duration: 285.516505ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T10:36:31.783279Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.202998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T10:36:31.784208Z","caller":"traceutil/trace.go:171","msg":"trace[205452147] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1150; }","duration":"201.218315ms","start":"2024-12-09T10:36:31.582978Z","end":"2024-12-09T10:36:31.784197Z","steps":["trace[205452147] 'agreement among raft nodes before linearized reading'  (duration: 200.181589ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T10:37:03.501699Z","caller":"traceutil/trace.go:171","msg":"trace[983110363] linearizableReadLoop","detail":"{readStateIndex:1367; appliedIndex:1366; }","duration":"164.085201ms","start":"2024-12-09T10:37:03.337595Z","end":"2024-12-09T10:37:03.501680Z","steps":["trace[983110363] 'read index received'  (duration: 163.884895ms)","trace[983110363] 'applied index is now lower than readState.Index'  (duration: 199.3µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T10:37:03.501825Z","caller":"traceutil/trace.go:171","msg":"trace[1263777103] transaction","detail":"{read_only:false; response_revision:1321; number_of_response:1; }","duration":"334.283368ms","start":"2024-12-09T10:37:03.167532Z","end":"2024-12-09T10:37:03.501816Z","steps":["trace[1263777103] 'process raft request'  (duration: 333.992621ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T10:37:03.501922Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T10:37:03.167514Z","time spent":"334.341426ms","remote":"127.0.0.1:33586","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1314 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:450 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-12-09T10:37:03.501935Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.957133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T10:37:03.501966Z","caller":"traceutil/trace.go:171","msg":"trace[903923745] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1321; }","duration":"114.987375ms","start":"2024-12-09T10:37:03.386971Z","end":"2024-12-09T10:37:03.501958Z","steps":["trace[903923745] 'agreement among raft nodes before linearized reading'  (duration: 114.933885ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T10:37:03.502196Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.593005ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:1 size:3395"}
	{"level":"info","ts":"2024-12-09T10:37:03.502231Z","caller":"traceutil/trace.go:171","msg":"trace[919535365] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:1; response_revision:1321; }","duration":"164.627907ms","start":"2024-12-09T10:37:03.337591Z","end":"2024-12-09T10:37:03.502219Z","steps":["trace[919535365] 'agreement among raft nodes before linearized reading'  (duration: 164.464661ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T10:37:33.606958Z","caller":"traceutil/trace.go:171","msg":"trace[550684819] transaction","detail":"{read_only:false; response_revision:1534; number_of_response:1; }","duration":"195.113759ms","start":"2024-12-09T10:37:33.411825Z","end":"2024-12-09T10:37:33.606939Z","steps":["trace[550684819] 'process raft request'  (duration: 194.812417ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T10:37:39.421340Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"320.593874ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T10:37:39.421382Z","caller":"traceutil/trace.go:171","msg":"trace[858721861] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1583; }","duration":"320.650246ms","start":"2024-12-09T10:37:39.100722Z","end":"2024-12-09T10:37:39.421372Z","steps":["trace[858721861] 'range keys from in-memory index tree'  (duration: 320.547608ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T10:37:39.421415Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T10:37:39.100685Z","time spent":"320.723166ms","remote":"127.0.0.1:33520","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-12-09T10:38:04.993007Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.108475ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T10:38:04.993146Z","caller":"traceutil/trace.go:171","msg":"trace[1223110132] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1742; }","duration":"215.266284ms","start":"2024-12-09T10:38:04.777869Z","end":"2024-12-09T10:38:04.993135Z","steps":["trace[1223110132] 'range keys from in-memory index tree'  (duration: 215.097466ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:42:22 up 8 min,  0 users,  load average: 0.05, 0.59, 0.46
	Linux addons-156041 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5be4a6b2466ed9d67eab208c35cd2ca892798f62c660be84f94db3f8ab52d683] <==
	 > logger="UnhandledError"
	E1209 10:36:55.777467       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.60.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.60.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.60.41:443: connect: connection refused" logger="UnhandledError"
	E1209 10:36:55.784986       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.60.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.60.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.60.41:443: connect: connection refused" logger="UnhandledError"
	I1209 10:36:55.857786       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1209 10:36:57.513785       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.155.64"}
	I1209 10:37:33.721035       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1209 10:37:34.851757       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E1209 10:37:41.211914       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1209 10:37:45.937314       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1209 10:37:46.111496       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.141.144"}
	I1209 10:37:47.972807       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1209 10:38:04.201732       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 10:38:04.201767       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 10:38:04.221039       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 10:38:04.221193       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 10:38:04.239856       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 10:38:04.240440       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 10:38:04.273597       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 10:38:04.273651       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 10:38:04.356607       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 10:38:04.356696       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1209 10:38:05.273698       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1209 10:38:05.357815       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1209 10:38:05.362669       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1209 10:40:08.901754       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.234.181"}
	
	
	==> kube-controller-manager [7d58a34c77c2cf417f71d9147333ecb767357bdae1c925fa33ad4de8512e260b] <==
	E1209 10:40:19.421966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1209 10:40:23.763132       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W1209 10:40:30.681569       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:40:30.681653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1209 10:40:31.237684       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-156041"
	W1209 10:40:40.416531       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:40:40.416597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 10:40:47.420937       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:40:47.420990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 10:40:57.465789       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:40:57.465846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 10:41:25.848757       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:41:25.848817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 10:41:31.675838       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:41:31.675885       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 10:41:33.850198       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:41:33.850286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 10:41:41.134830       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:41:41.134899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 10:42:10.864656       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:42:10.865042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 10:42:11.617070       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:42:11.617158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 10:42:12.839791       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 10:42:12.839904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [caaaa0e3d8bef385b8e9d924a28834f3d28156742f878b33c9f3ca3839a8061d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 10:35:02.287373       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 10:35:02.298445       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.161"]
	E1209 10:35:02.298522       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 10:35:02.396828       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 10:35:02.396865       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 10:35:02.396925       1 server_linux.go:169] "Using iptables Proxier"
	I1209 10:35:02.399843       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 10:35:02.400173       1 server.go:483] "Version info" version="v1.31.2"
	I1209 10:35:02.400184       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 10:35:02.401421       1 config.go:199] "Starting service config controller"
	I1209 10:35:02.401436       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 10:35:02.401466       1 config.go:105] "Starting endpoint slice config controller"
	I1209 10:35:02.401469       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 10:35:02.401977       1 config.go:328] "Starting node config controller"
	I1209 10:35:02.401989       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 10:35:02.501603       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 10:35:02.501668       1 shared_informer.go:320] Caches are synced for service config
	I1209 10:35:02.502275       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [69162feab66f8d9454b1e7b9084d063da9017e38a8589fb25dcebe2fda8589e9] <==
	W1209 10:34:52.190757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 10:34:52.192377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:52.190794       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 10:34:52.192477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:52.190832       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 10:34:52.195241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:52.190871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1209 10:34:52.195434       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:52.191018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1209 10:34:52.195549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:53.051931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 10:34:53.051969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:53.060851       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 10:34:53.060898       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:53.066537       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 10:34:53.068478       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1209 10:34:53.081535       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1209 10:34:53.082186       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:53.092324       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1209 10:34:53.092369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:53.171566       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1209 10:34:53.171647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 10:34:53.398292       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1209 10:34:53.398338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1209 10:34:55.084840       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 10:40:55 addons-156041 kubelet[1220]: I1209 10:40:55.099225    1220 scope.go:117] "RemoveContainer" containerID="c43d4f61885a87b0aba332542acce1eb8cd51fb0e025f8c0cfafa32bfccc1697"
	Dec 09 10:40:55 addons-156041 kubelet[1220]: I1209 10:40:55.116754    1220 scope.go:117] "RemoveContainer" containerID="41ee9c95df1f4ede24dfd3c41f7b3ab5d5965af558374e35fd11ae922165ff16"
	Dec 09 10:41:04 addons-156041 kubelet[1220]: E1209 10:41:04.776787    1220 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740864776531136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:41:04 addons-156041 kubelet[1220]: E1209 10:41:04.776994    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740864776531136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:41:14 addons-156041 kubelet[1220]: E1209 10:41:14.780569    1220 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740874779696594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:41:14 addons-156041 kubelet[1220]: E1209 10:41:14.780592    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740874779696594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:41:24 addons-156041 kubelet[1220]: E1209 10:41:24.783155    1220 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740884782791302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:41:24 addons-156041 kubelet[1220]: E1209 10:41:24.783416    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740884782791302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:41:34 addons-156041 kubelet[1220]: E1209 10:41:34.786047    1220 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740894785769948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:41:34 addons-156041 kubelet[1220]: E1209 10:41:34.786129    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740894785769948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:41:44 addons-156041 kubelet[1220]: E1209 10:41:44.788454    1220 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740904788238171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:41:44 addons-156041 kubelet[1220]: E1209 10:41:44.788491    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740904788238171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:41:54 addons-156041 kubelet[1220]: E1209 10:41:54.621646    1220 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 10:41:54 addons-156041 kubelet[1220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 10:41:54 addons-156041 kubelet[1220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 10:41:54 addons-156041 kubelet[1220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 10:41:54 addons-156041 kubelet[1220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 10:41:54 addons-156041 kubelet[1220]: E1209 10:41:54.790693    1220 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740914790207011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:41:54 addons-156041 kubelet[1220]: E1209 10:41:54.790716    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740914790207011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:41:58 addons-156041 kubelet[1220]: I1209 10:41:58.606241    1220 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-hbkzd" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 10:42:04 addons-156041 kubelet[1220]: I1209 10:42:04.606856    1220 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 10:42:04 addons-156041 kubelet[1220]: E1209 10:42:04.793191    1220 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740924792901586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:42:04 addons-156041 kubelet[1220]: E1209 10:42:04.793226    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740924792901586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:42:14 addons-156041 kubelet[1220]: E1209 10:42:14.795680    1220 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740934795302784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:42:14 addons-156041 kubelet[1220]: E1209 10:42:14.795965    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733740934795302784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [e75072ac379d65e9afbaf83daf64c569651d9cdc52aca2478b15b50422e9bb9a] <==
	I1209 10:35:07.796260       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 10:35:07.836192       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 10:35:07.836272       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 10:35:07.927783       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 10:35:07.927954       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-156041_ebf2427a-c807-47ef-a9e5-1cb1fc71f37a!
	I1209 10:35:07.934947       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b5aa873f-06fb-48fe-8c0c-8c1d664836ee", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-156041_ebf2427a-c807-47ef-a9e5-1cb1fc71f37a became leader
	I1209 10:35:08.044411       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-156041_ebf2427a-c807-47ef-a9e5-1cb1fc71f37a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-156041 -n addons-156041
helpers_test.go:261: (dbg) Run:  kubectl --context addons-156041 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (327.51s)
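If this flake needs manual follow-up, the post-mortem checks quoted above can be replayed by hand. A minimal sketch, assuming the addons-156041 profile is still up and the same out/minikube-linux-amd64 binary and kubectl context used by the job (all commands are taken verbatim from the helpers_test.go / addons_test.go lines above; only the shell quoting is added):

    # Confirm the API server for the profile is reachable (same check helpers_test.go:254 runs).
    out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-156041 -n addons-156041

    # List any pods not in Running phase across all namespaces (helpers_test.go:261).
    kubectl --context addons-156041 get po -o=jsonpath='{.items[*].metadata.name}' -A \
      --field-selector='status.phase!=Running'

    # Tear down the addon under test, as addons_test.go:992 does at the end of the run.
    out/minikube-linux-amd64 -p addons-156041 addons disable metrics-server --alsologtostderr -v=1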

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.43s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-156041
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-156041: exit status 82 (2m0.471926857s)

                                                
                                                
-- stdout --
	* Stopping node "addons-156041"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-156041" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-156041
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-156041: exit status 11 (21.665517326s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-156041" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-156041
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-156041: exit status 11 (6.143512614s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-156041" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-156041
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-156041: exit status 11 (6.144578354s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-156041" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.43s)
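Note: every failure in this group reduces to the same condition: after the failed stop, the addon commands cannot reach the node over SSH, so the paused-state check fails with "dial tcp 192.168.39.161:22: connect: no route to host". A minimal Go sketch of that kind of reachability probe follows; it is illustrative only (the address and timeout come from the log above, and probeSSH is a hypothetical helper, not minikube's actual code):

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH opens a plain TCP connection to the node's SSH port, roughly the
// precondition the failing "new client" step needs before it can run crictl.
func probeSSH(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("node unreachable: %w", err) // e.g. "no route to host"
	}
	return conn.Close()
}

func main() {
	// 192.168.39.161:22 is the address reported in the failures above.
	if err := probeSSH("192.168.39.161:22", 5*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("ssh port reachable")
	}
}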

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 node stop m02 -v=7 --alsologtostderr
E1209 10:54:03.628665  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:54:44.590566  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-792382 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.466863365s)

                                                
                                                
-- stdout --
	* Stopping node "ha-792382-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 10:53:46.285193  631402 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:53:46.285485  631402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:53:46.285496  631402 out.go:358] Setting ErrFile to fd 2...
	I1209 10:53:46.285502  631402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:53:46.285719  631402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 10:53:46.286003  631402 mustload.go:65] Loading cluster: ha-792382
	I1209 10:53:46.286504  631402 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:53:46.286526  631402 stop.go:39] StopHost: ha-792382-m02
	I1209 10:53:46.286913  631402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:53:46.286978  631402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:53:46.304340  631402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37049
	I1209 10:53:46.304987  631402 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:53:46.305691  631402 main.go:141] libmachine: Using API Version  1
	I1209 10:53:46.305723  631402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:53:46.306205  631402 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:53:46.308214  631402 out.go:177] * Stopping node "ha-792382-m02"  ...
	I1209 10:53:46.309519  631402 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1209 10:53:46.309549  631402 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:53:46.309775  631402 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1209 10:53:46.309802  631402 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:53:46.313056  631402 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:53:46.313578  631402 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:53:46.313612  631402 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:53:46.313905  631402 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:53:46.314118  631402 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:53:46.314295  631402 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:53:46.314462  631402 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa Username:docker}
	I1209 10:53:46.397403  631402 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1209 10:53:46.454612  631402 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1209 10:53:46.489671  631402 main.go:141] libmachine: Stopping "ha-792382-m02"...
	I1209 10:53:46.489703  631402 main.go:141] libmachine: (ha-792382-m02) Calling .GetState
	I1209 10:53:46.491422  631402 main.go:141] libmachine: (ha-792382-m02) Calling .Stop
	I1209 10:53:46.495193  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 0/120
	I1209 10:53:47.496726  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 1/120
	I1209 10:53:48.498000  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 2/120
	I1209 10:53:49.499313  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 3/120
	I1209 10:53:50.500798  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 4/120
	I1209 10:53:51.502986  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 5/120
	I1209 10:53:52.504882  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 6/120
	I1209 10:53:53.506047  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 7/120
	I1209 10:53:54.507536  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 8/120
	I1209 10:53:55.508800  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 9/120
	I1209 10:53:56.510108  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 10/120
	I1209 10:53:57.511503  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 11/120
	I1209 10:53:58.512970  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 12/120
	I1209 10:53:59.514226  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 13/120
	I1209 10:54:00.515846  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 14/120
	I1209 10:54:01.518195  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 15/120
	I1209 10:54:02.519596  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 16/120
	I1209 10:54:03.521274  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 17/120
	I1209 10:54:04.522612  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 18/120
	I1209 10:54:05.524623  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 19/120
	I1209 10:54:06.527065  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 20/120
	I1209 10:54:07.528508  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 21/120
	I1209 10:54:08.529903  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 22/120
	I1209 10:54:09.531382  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 23/120
	I1209 10:54:10.532742  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 24/120
	I1209 10:54:11.534884  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 25/120
	I1209 10:54:12.536982  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 26/120
	I1209 10:54:13.538512  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 27/120
	I1209 10:54:14.541039  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 28/120
	I1209 10:54:15.543111  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 29/120
	I1209 10:54:16.545626  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 30/120
	I1209 10:54:17.547217  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 31/120
	I1209 10:54:18.548715  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 32/120
	I1209 10:54:19.550021  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 33/120
	I1209 10:54:20.552016  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 34/120
	I1209 10:54:21.553503  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 35/120
	I1209 10:54:22.555115  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 36/120
	I1209 10:54:23.556641  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 37/120
	I1209 10:54:24.558135  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 38/120
	I1209 10:54:25.559630  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 39/120
	I1209 10:54:26.561720  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 40/120
	I1209 10:54:27.563162  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 41/120
	I1209 10:54:28.564764  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 42/120
	I1209 10:54:29.566292  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 43/120
	I1209 10:54:30.567591  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 44/120
	I1209 10:54:31.569652  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 45/120
	I1209 10:54:32.571149  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 46/120
	I1209 10:54:33.572652  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 47/120
	I1209 10:54:34.574066  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 48/120
	I1209 10:54:35.576108  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 49/120
	I1209 10:54:36.578427  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 50/120
	I1209 10:54:37.579882  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 51/120
	I1209 10:54:38.581255  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 52/120
	I1209 10:54:39.582891  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 53/120
	I1209 10:54:40.584354  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 54/120
	I1209 10:54:41.586788  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 55/120
	I1209 10:54:42.588815  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 56/120
	I1209 10:54:43.590271  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 57/120
	I1209 10:54:44.591589  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 58/120
	I1209 10:54:45.593113  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 59/120
	I1209 10:54:46.595383  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 60/120
	I1209 10:54:47.596584  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 61/120
	I1209 10:54:48.597990  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 62/120
	I1209 10:54:49.599474  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 63/120
	I1209 10:54:50.600974  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 64/120
	I1209 10:54:51.603139  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 65/120
	I1209 10:54:52.605088  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 66/120
	I1209 10:54:53.606553  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 67/120
	I1209 10:54:54.608071  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 68/120
	I1209 10:54:55.609865  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 69/120
	I1209 10:54:56.611450  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 70/120
	I1209 10:54:57.612741  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 71/120
	I1209 10:54:58.614405  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 72/120
	I1209 10:54:59.616783  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 73/120
	I1209 10:55:00.618526  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 74/120
	I1209 10:55:01.620662  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 75/120
	I1209 10:55:02.622398  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 76/120
	I1209 10:55:03.624733  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 77/120
	I1209 10:55:04.626052  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 78/120
	I1209 10:55:05.627491  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 79/120
	I1209 10:55:06.629614  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 80/120
	I1209 10:55:07.630866  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 81/120
	I1209 10:55:08.633059  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 82/120
	I1209 10:55:09.634455  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 83/120
	I1209 10:55:10.635726  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 84/120
	I1209 10:55:11.637459  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 85/120
	I1209 10:55:12.638973  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 86/120
	I1209 10:55:13.640983  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 87/120
	I1209 10:55:14.643290  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 88/120
	I1209 10:55:15.645418  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 89/120
	I1209 10:55:16.647806  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 90/120
	I1209 10:55:17.649668  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 91/120
	I1209 10:55:18.651128  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 92/120
	I1209 10:55:19.652684  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 93/120
	I1209 10:55:20.654509  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 94/120
	I1209 10:55:21.656699  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 95/120
	I1209 10:55:22.658505  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 96/120
	I1209 10:55:23.659825  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 97/120
	I1209 10:55:24.661437  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 98/120
	I1209 10:55:25.663230  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 99/120
	I1209 10:55:26.665437  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 100/120
	I1209 10:55:27.667004  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 101/120
	I1209 10:55:28.668407  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 102/120
	I1209 10:55:29.669738  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 103/120
	I1209 10:55:30.671024  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 104/120
	I1209 10:55:31.672998  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 105/120
	I1209 10:55:32.674474  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 106/120
	I1209 10:55:33.675746  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 107/120
	I1209 10:55:34.677110  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 108/120
	I1209 10:55:35.678430  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 109/120
	I1209 10:55:36.680483  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 110/120
	I1209 10:55:37.681845  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 111/120
	I1209 10:55:38.683472  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 112/120
	I1209 10:55:39.684790  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 113/120
	I1209 10:55:40.686241  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 114/120
	I1209 10:55:41.688263  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 115/120
	I1209 10:55:42.689786  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 116/120
	I1209 10:55:43.691086  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 117/120
	I1209 10:55:44.692404  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 118/120
	I1209 10:55:45.693891  631402 main.go:141] libmachine: (ha-792382-m02) Waiting for machine to stop 119/120
	I1209 10:55:46.694816  631402 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1209 10:55:46.694985  631402 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-792382 node stop m02 -v=7 --alsologtostderr": exit status 30
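Note: the two minutes spent in the stop above come from the driver polling the VM state once per second for 120 attempts ("Waiting for machine to stop N/120") before giving up and returning exit status 30. A rough Go sketch of that wait loop follows; it is an illustration only (waitForStop and the string states are stand-ins, not libmachine's real API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls the VM state once per second, mirroring the
// "Waiting for machine to stop N/120" lines in the log above.
func waitForStop(getState func() string, attempts int) error {
	for i := 0; i < attempts; i++ {
		if getState() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A VM that never leaves "Running" reproduces the failure mode seen here.
	fmt.Println(waitForStop(func() string { return "Running" }, 120))
}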
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr: (18.856465088s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-792382 -n ha-792382
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 logs -n 25
E1209 10:56:06.512578  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-792382 logs -n 25: (1.426758385s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3558467820/001/cp-test_ha-792382-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382:/home/docker/cp-test_ha-792382-m03_ha-792382.txt                       |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382 sudo cat                                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m03_ha-792382.txt                                 |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m02:/home/docker/cp-test_ha-792382-m03_ha-792382-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m02 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m03_ha-792382-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04:/home/docker/cp-test_ha-792382-m03_ha-792382-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m04 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m03_ha-792382-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp testdata/cp-test.txt                                                | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3558467820/001/cp-test_ha-792382-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382:/home/docker/cp-test_ha-792382-m04_ha-792382.txt                       |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382 sudo cat                                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382.txt                                 |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m02:/home/docker/cp-test_ha-792382-m04_ha-792382-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m02 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03:/home/docker/cp-test_ha-792382-m04_ha-792382-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m03 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-792382 node stop m02 -v=7                                                     | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 10:49:12
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 10:49:12.155112  627293 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:49:12.155243  627293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:49:12.155252  627293 out.go:358] Setting ErrFile to fd 2...
	I1209 10:49:12.155256  627293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:49:12.155455  627293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 10:49:12.156111  627293 out.go:352] Setting JSON to false
	I1209 10:49:12.157109  627293 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":12696,"bootTime":1733728656,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 10:49:12.157245  627293 start.go:139] virtualization: kvm guest
	I1209 10:49:12.159303  627293 out.go:177] * [ha-792382] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 10:49:12.160611  627293 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 10:49:12.160611  627293 notify.go:220] Checking for updates...
	I1209 10:49:12.163029  627293 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 10:49:12.164218  627293 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:49:12.165346  627293 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:12.166392  627293 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 10:49:12.168066  627293 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 10:49:12.169526  627293 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 10:49:12.205667  627293 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 10:49:12.206853  627293 start.go:297] selected driver: kvm2
	I1209 10:49:12.206869  627293 start.go:901] validating driver "kvm2" against <nil>
	I1209 10:49:12.206881  627293 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 10:49:12.207633  627293 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:49:12.207718  627293 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 10:49:12.223409  627293 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 10:49:12.223621  627293 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 10:49:12.224275  627293 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 10:49:12.224320  627293 cni.go:84] Creating CNI manager for ""
	I1209 10:49:12.224382  627293 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1209 10:49:12.224394  627293 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 10:49:12.224467  627293 start.go:340] cluster config:
	{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:49:12.224624  627293 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:49:12.226221  627293 out.go:177] * Starting "ha-792382" primary control-plane node in "ha-792382" cluster
	I1209 10:49:12.227308  627293 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:49:12.227336  627293 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 10:49:12.227354  627293 cache.go:56] Caching tarball of preloaded images
	I1209 10:49:12.227432  627293 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 10:49:12.227447  627293 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 10:49:12.227749  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:49:12.227772  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json: {Name:mkc1440c2022322fca4f71077ddb8bd509450a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:12.227928  627293 start.go:360] acquireMachinesLock for ha-792382: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 10:49:12.227972  627293 start.go:364] duration metric: took 26.731µs to acquireMachinesLock for "ha-792382"
	I1209 10:49:12.227996  627293 start.go:93] Provisioning new machine with config: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:49:12.228057  627293 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 10:49:12.229507  627293 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 10:49:12.229650  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:12.229688  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:12.243739  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40935
	I1209 10:49:12.244181  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:12.244733  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:12.244754  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:12.245151  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:12.245359  627293 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:49:12.245524  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:12.245673  627293 start.go:159] libmachine.API.Create for "ha-792382" (driver="kvm2")
	I1209 10:49:12.245706  627293 client.go:168] LocalClient.Create starting
	I1209 10:49:12.245734  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 10:49:12.245764  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:49:12.245782  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:49:12.245831  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 10:49:12.245849  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:49:12.245860  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:49:12.245876  627293 main.go:141] libmachine: Running pre-create checks...
	I1209 10:49:12.245884  627293 main.go:141] libmachine: (ha-792382) Calling .PreCreateCheck
	I1209 10:49:12.246327  627293 main.go:141] libmachine: (ha-792382) Calling .GetConfigRaw
	I1209 10:49:12.246669  627293 main.go:141] libmachine: Creating machine...
	I1209 10:49:12.246682  627293 main.go:141] libmachine: (ha-792382) Calling .Create
	I1209 10:49:12.246831  627293 main.go:141] libmachine: (ha-792382) Creating KVM machine...
	I1209 10:49:12.248145  627293 main.go:141] libmachine: (ha-792382) DBG | found existing default KVM network
	I1209 10:49:12.248911  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.248755  627316 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123350}
	I1209 10:49:12.248939  627293 main.go:141] libmachine: (ha-792382) DBG | created network xml: 
	I1209 10:49:12.248951  627293 main.go:141] libmachine: (ha-792382) DBG | <network>
	I1209 10:49:12.248971  627293 main.go:141] libmachine: (ha-792382) DBG |   <name>mk-ha-792382</name>
	I1209 10:49:12.248981  627293 main.go:141] libmachine: (ha-792382) DBG |   <dns enable='no'/>
	I1209 10:49:12.248994  627293 main.go:141] libmachine: (ha-792382) DBG |   
	I1209 10:49:12.249009  627293 main.go:141] libmachine: (ha-792382) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1209 10:49:12.249019  627293 main.go:141] libmachine: (ha-792382) DBG |     <dhcp>
	I1209 10:49:12.249032  627293 main.go:141] libmachine: (ha-792382) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1209 10:49:12.249045  627293 main.go:141] libmachine: (ha-792382) DBG |     </dhcp>
	I1209 10:49:12.249058  627293 main.go:141] libmachine: (ha-792382) DBG |   </ip>
	I1209 10:49:12.249067  627293 main.go:141] libmachine: (ha-792382) DBG |   
	I1209 10:49:12.249134  627293 main.go:141] libmachine: (ha-792382) DBG | </network>
	I1209 10:49:12.249173  627293 main.go:141] libmachine: (ha-792382) DBG | 
	I1209 10:49:12.253952  627293 main.go:141] libmachine: (ha-792382) DBG | trying to create private KVM network mk-ha-792382 192.168.39.0/24...
	I1209 10:49:12.320765  627293 main.go:141] libmachine: (ha-792382) DBG | private KVM network mk-ha-792382 192.168.39.0/24 created
	I1209 10:49:12.320810  627293 main.go:141] libmachine: (ha-792382) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382 ...
	I1209 10:49:12.320824  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.320703  627316 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:12.320846  627293 main.go:141] libmachine: (ha-792382) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 10:49:12.320864  627293 main.go:141] libmachine: (ha-792382) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 10:49:12.624365  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.624217  627316 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa...
	I1209 10:49:12.718158  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.718015  627316 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/ha-792382.rawdisk...
	I1209 10:49:12.718234  627293 main.go:141] libmachine: (ha-792382) DBG | Writing magic tar header
	I1209 10:49:12.718307  627293 main.go:141] libmachine: (ha-792382) DBG | Writing SSH key tar header
	I1209 10:49:12.718345  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.718134  627316 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382 ...
	I1209 10:49:12.718360  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382 (perms=drwx------)
	I1209 10:49:12.718367  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382
	I1209 10:49:12.718384  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 10:49:12.718399  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:12.718409  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 10:49:12.718416  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 10:49:12.718424  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 10:49:12.718431  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 10:49:12.718436  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins
	I1209 10:49:12.718443  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home
	I1209 10:49:12.718449  627293 main.go:141] libmachine: (ha-792382) DBG | Skipping /home - not owner
	I1209 10:49:12.718461  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 10:49:12.718475  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 10:49:12.718495  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 10:49:12.718506  627293 main.go:141] libmachine: (ha-792382) Creating domain...
	I1209 10:49:12.719443  627293 main.go:141] libmachine: (ha-792382) define libvirt domain using xml: 
	I1209 10:49:12.719473  627293 main.go:141] libmachine: (ha-792382) <domain type='kvm'>
	I1209 10:49:12.719482  627293 main.go:141] libmachine: (ha-792382)   <name>ha-792382</name>
	I1209 10:49:12.719490  627293 main.go:141] libmachine: (ha-792382)   <memory unit='MiB'>2200</memory>
	I1209 10:49:12.719512  627293 main.go:141] libmachine: (ha-792382)   <vcpu>2</vcpu>
	I1209 10:49:12.719521  627293 main.go:141] libmachine: (ha-792382)   <features>
	I1209 10:49:12.719529  627293 main.go:141] libmachine: (ha-792382)     <acpi/>
	I1209 10:49:12.719537  627293 main.go:141] libmachine: (ha-792382)     <apic/>
	I1209 10:49:12.719561  627293 main.go:141] libmachine: (ha-792382)     <pae/>
	I1209 10:49:12.719580  627293 main.go:141] libmachine: (ha-792382)     
	I1209 10:49:12.719586  627293 main.go:141] libmachine: (ha-792382)   </features>
	I1209 10:49:12.719602  627293 main.go:141] libmachine: (ha-792382)   <cpu mode='host-passthrough'>
	I1209 10:49:12.719613  627293 main.go:141] libmachine: (ha-792382)   
	I1209 10:49:12.719619  627293 main.go:141] libmachine: (ha-792382)   </cpu>
	I1209 10:49:12.719631  627293 main.go:141] libmachine: (ha-792382)   <os>
	I1209 10:49:12.719637  627293 main.go:141] libmachine: (ha-792382)     <type>hvm</type>
	I1209 10:49:12.719648  627293 main.go:141] libmachine: (ha-792382)     <boot dev='cdrom'/>
	I1209 10:49:12.719659  627293 main.go:141] libmachine: (ha-792382)     <boot dev='hd'/>
	I1209 10:49:12.719681  627293 main.go:141] libmachine: (ha-792382)     <bootmenu enable='no'/>
	I1209 10:49:12.719701  627293 main.go:141] libmachine: (ha-792382)   </os>
	I1209 10:49:12.719719  627293 main.go:141] libmachine: (ha-792382)   <devices>
	I1209 10:49:12.719738  627293 main.go:141] libmachine: (ha-792382)     <disk type='file' device='cdrom'>
	I1209 10:49:12.719756  627293 main.go:141] libmachine: (ha-792382)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/boot2docker.iso'/>
	I1209 10:49:12.719767  627293 main.go:141] libmachine: (ha-792382)       <target dev='hdc' bus='scsi'/>
	I1209 10:49:12.719777  627293 main.go:141] libmachine: (ha-792382)       <readonly/>
	I1209 10:49:12.719791  627293 main.go:141] libmachine: (ha-792382)     </disk>
	I1209 10:49:12.719805  627293 main.go:141] libmachine: (ha-792382)     <disk type='file' device='disk'>
	I1209 10:49:12.719816  627293 main.go:141] libmachine: (ha-792382)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 10:49:12.719831  627293 main.go:141] libmachine: (ha-792382)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/ha-792382.rawdisk'/>
	I1209 10:49:12.719845  627293 main.go:141] libmachine: (ha-792382)       <target dev='hda' bus='virtio'/>
	I1209 10:49:12.719857  627293 main.go:141] libmachine: (ha-792382)     </disk>
	I1209 10:49:12.719868  627293 main.go:141] libmachine: (ha-792382)     <interface type='network'>
	I1209 10:49:12.719881  627293 main.go:141] libmachine: (ha-792382)       <source network='mk-ha-792382'/>
	I1209 10:49:12.719892  627293 main.go:141] libmachine: (ha-792382)       <model type='virtio'/>
	I1209 10:49:12.719902  627293 main.go:141] libmachine: (ha-792382)     </interface>
	I1209 10:49:12.719910  627293 main.go:141] libmachine: (ha-792382)     <interface type='network'>
	I1209 10:49:12.719940  627293 main.go:141] libmachine: (ha-792382)       <source network='default'/>
	I1209 10:49:12.719966  627293 main.go:141] libmachine: (ha-792382)       <model type='virtio'/>
	I1209 10:49:12.719981  627293 main.go:141] libmachine: (ha-792382)     </interface>
	I1209 10:49:12.719994  627293 main.go:141] libmachine: (ha-792382)     <serial type='pty'>
	I1209 10:49:12.720009  627293 main.go:141] libmachine: (ha-792382)       <target port='0'/>
	I1209 10:49:12.720026  627293 main.go:141] libmachine: (ha-792382)     </serial>
	I1209 10:49:12.720038  627293 main.go:141] libmachine: (ha-792382)     <console type='pty'>
	I1209 10:49:12.720049  627293 main.go:141] libmachine: (ha-792382)       <target type='serial' port='0'/>
	I1209 10:49:12.720070  627293 main.go:141] libmachine: (ha-792382)     </console>
	I1209 10:49:12.720083  627293 main.go:141] libmachine: (ha-792382)     <rng model='virtio'>
	I1209 10:49:12.720106  627293 main.go:141] libmachine: (ha-792382)       <backend model='random'>/dev/random</backend>
	I1209 10:49:12.720122  627293 main.go:141] libmachine: (ha-792382)     </rng>
	I1209 10:49:12.720133  627293 main.go:141] libmachine: (ha-792382)     
	I1209 10:49:12.720141  627293 main.go:141] libmachine: (ha-792382)     
	I1209 10:49:12.720152  627293 main.go:141] libmachine: (ha-792382)   </devices>
	I1209 10:49:12.720161  627293 main.go:141] libmachine: (ha-792382) </domain>
	I1209 10:49:12.720175  627293 main.go:141] libmachine: (ha-792382) 
	I1209 10:49:12.724156  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:b1:77:e1 in network default
	I1209 10:49:12.724674  627293 main.go:141] libmachine: (ha-792382) Ensuring networks are active...
	I1209 10:49:12.724713  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:12.725331  627293 main.go:141] libmachine: (ha-792382) Ensuring network default is active
	I1209 10:49:12.725573  627293 main.go:141] libmachine: (ha-792382) Ensuring network mk-ha-792382 is active
	I1209 10:49:12.726011  627293 main.go:141] libmachine: (ha-792382) Getting domain xml...
	I1209 10:49:12.726856  627293 main.go:141] libmachine: (ha-792382) Creating domain...
	I1209 10:49:13.913426  627293 main.go:141] libmachine: (ha-792382) Waiting to get IP...
	I1209 10:49:13.914474  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:13.914854  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:13.914884  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:13.914843  627316 retry.go:31] will retry after 231.46558ms: waiting for machine to come up
	I1209 10:49:14.148392  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:14.148786  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:14.148818  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:14.148733  627316 retry.go:31] will retry after 323.334507ms: waiting for machine to come up
	I1209 10:49:14.473105  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:14.473482  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:14.473521  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:14.473432  627316 retry.go:31] will retry after 293.410473ms: waiting for machine to come up
	I1209 10:49:14.769073  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:14.769413  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:14.769442  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:14.769369  627316 retry.go:31] will retry after 414.561658ms: waiting for machine to come up
	I1209 10:49:15.186115  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:15.186526  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:15.186550  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:15.186486  627316 retry.go:31] will retry after 602.170929ms: waiting for machine to come up
	I1209 10:49:15.790232  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:15.790609  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:15.790636  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:15.790561  627316 retry.go:31] will retry after 626.828073ms: waiting for machine to come up
	I1209 10:49:16.419433  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:16.419896  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:16.419938  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:16.419857  627316 retry.go:31] will retry after 735.370165ms: waiting for machine to come up
	I1209 10:49:17.156849  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:17.157231  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:17.157266  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:17.157218  627316 retry.go:31] will retry after 1.229419392s: waiting for machine to come up
	I1209 10:49:18.387855  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:18.388261  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:18.388300  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:18.388201  627316 retry.go:31] will retry after 1.781823768s: waiting for machine to come up
	I1209 10:49:20.172140  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:20.172552  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:20.172583  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:20.172526  627316 retry.go:31] will retry after 1.563022016s: waiting for machine to come up
	I1209 10:49:21.736731  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:21.737192  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:21.737227  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:21.737132  627316 retry.go:31] will retry after 1.796183688s: waiting for machine to come up
	I1209 10:49:23.536165  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:23.536600  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:23.536633  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:23.536553  627316 retry.go:31] will retry after 2.766987907s: waiting for machine to come up
	I1209 10:49:26.306562  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:26.306896  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:26.306918  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:26.306878  627316 retry.go:31] will retry after 3.713874413s: waiting for machine to come up
	I1209 10:49:30.024188  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:30.024650  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:30.024693  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:30.024632  627316 retry.go:31] will retry after 4.575233995s: waiting for machine to come up
	I1209 10:49:34.603079  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.603556  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has current primary IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.603577  627293 main.go:141] libmachine: (ha-792382) Found IP for machine: 192.168.39.69
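The repeated "will retry after …: waiting for machine to come up" lines above come from a poll loop that backs off between attempts until the guest picks up a DHCP lease. A schematic version of such a loop (a sketch only, not minikube's retry.go) could look like:

// Schematic poll-until-ready loop with growing, jittered delays,
// in the spirit of the "will retry after ..." lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter so retries do not synchronize.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		// Stub lookup: pretend the lease appears after two seconds.
		if time.Since(start) < 2*time.Second {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.69", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}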
	I1209 10:49:34.603593  627293 main.go:141] libmachine: (ha-792382) Reserving static IP address...
	I1209 10:49:34.603995  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find host DHCP lease matching {name: "ha-792382", mac: "52:54:00:a8:82:f7", ip: "192.168.39.69"} in network mk-ha-792382
	I1209 10:49:34.677115  627293 main.go:141] libmachine: (ha-792382) DBG | Getting to WaitForSSH function...
	I1209 10:49:34.677150  627293 main.go:141] libmachine: (ha-792382) Reserved static IP address: 192.168.39.69
	I1209 10:49:34.677164  627293 main.go:141] libmachine: (ha-792382) Waiting for SSH to be available...
	I1209 10:49:34.680016  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.680510  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:34.680547  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.680683  627293 main.go:141] libmachine: (ha-792382) DBG | Using SSH client type: external
	I1209 10:49:34.680713  627293 main.go:141] libmachine: (ha-792382) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa (-rw-------)
	I1209 10:49:34.680743  627293 main.go:141] libmachine: (ha-792382) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 10:49:34.680759  627293 main.go:141] libmachine: (ha-792382) DBG | About to run SSH command:
	I1209 10:49:34.680771  627293 main.go:141] libmachine: (ha-792382) DBG | exit 0
	I1209 10:49:34.802056  627293 main.go:141] libmachine: (ha-792382) DBG | SSH cmd err, output: <nil>: 
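The probe above simply runs "exit 0" over SSH with the machine's private key until it succeeds. A minimal stand-alone equivalent using golang.org/x/crypto/ssh (an assumption for illustration; the log shows minikube shelling out to /usr/bin/ssh at this point) might be:

// Sketch of an "exit 0 over SSH" availability probe. Paths and addresses are
// taken from the log lines above; the Go SSH client here is an assumption.
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical local copy of the machine key path shown in the log.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-792382/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}

	client, err := ssh.Dial("tcp", "192.168.39.69:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// "exit 0" succeeds only once a shell is reachable, i.e. SSH is available.
	if err := sess.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}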
	I1209 10:49:34.802342  627293 main.go:141] libmachine: (ha-792382) KVM machine creation complete!
	I1209 10:49:34.802652  627293 main.go:141] libmachine: (ha-792382) Calling .GetConfigRaw
	I1209 10:49:34.803265  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:34.803470  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:34.803641  627293 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 10:49:34.803655  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:49:34.804897  627293 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 10:49:34.804910  627293 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 10:49:34.804920  627293 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 10:49:34.804925  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:34.807181  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.807580  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:34.807606  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.807797  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:34.807971  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:34.808252  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:34.808380  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:34.808550  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:34.808901  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:34.808916  627293 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 10:49:34.901048  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:49:34.901075  627293 main.go:141] libmachine: Detecting the provisioner...
	I1209 10:49:34.901084  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:34.903801  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.904137  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:34.904167  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.904294  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:34.904473  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:34.904619  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:34.904801  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:34.904935  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:34.905144  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:34.905156  627293 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 10:49:34.998134  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 10:49:34.998232  627293 main.go:141] libmachine: found compatible host: buildroot
	I1209 10:49:34.998245  627293 main.go:141] libmachine: Provisioning with buildroot...
	I1209 10:49:34.998256  627293 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:49:34.998517  627293 buildroot.go:166] provisioning hostname "ha-792382"
	I1209 10:49:34.998550  627293 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:49:34.998742  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.001204  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.001556  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.001585  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.001746  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.001925  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.002086  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.002233  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.002387  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:35.002580  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:35.002594  627293 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792382 && echo "ha-792382" | sudo tee /etc/hostname
	I1209 10:49:35.111878  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792382
	
	I1209 10:49:35.111914  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.114679  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.114968  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.114999  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.115174  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.115415  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.115601  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.115731  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.115880  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:35.116106  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:35.116130  627293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792382' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792382/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792382' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 10:49:35.218632  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:49:35.218667  627293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 10:49:35.218688  627293 buildroot.go:174] setting up certificates
	I1209 10:49:35.218699  627293 provision.go:84] configureAuth start
	I1209 10:49:35.218708  627293 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:49:35.218985  627293 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:49:35.221513  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.221813  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.221835  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.221978  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.224283  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.224638  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.224666  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.224816  627293 provision.go:143] copyHostCerts
	I1209 10:49:35.224849  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:49:35.224892  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 10:49:35.224913  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:49:35.225004  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 10:49:35.225113  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:49:35.225145  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 10:49:35.225155  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:49:35.225195  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 10:49:35.225255  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:49:35.225280  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 10:49:35.225290  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:49:35.225325  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 10:49:35.225392  627293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.ha-792382 san=[127.0.0.1 192.168.39.69 ha-792382 localhost minikube]
	I1209 10:49:35.530739  627293 provision.go:177] copyRemoteCerts
	I1209 10:49:35.530807  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 10:49:35.530832  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.533806  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.534127  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.534158  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.534311  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.534552  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.534707  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.534862  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:35.611999  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 10:49:35.612097  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 10:49:35.633738  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 10:49:35.633820  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1209 10:49:35.654744  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 10:49:35.654813  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 10:49:35.675689  627293 provision.go:87] duration metric: took 456.977679ms to configureAuth
	I1209 10:49:35.675718  627293 buildroot.go:189] setting minikube options for container-runtime
	I1209 10:49:35.675925  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:49:35.676032  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.678943  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.679261  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.679289  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.679496  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.679710  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.679841  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.679959  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.680105  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:35.680332  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:35.680355  627293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 10:49:35.879810  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 10:49:35.879848  627293 main.go:141] libmachine: Checking connection to Docker...
	I1209 10:49:35.879878  627293 main.go:141] libmachine: (ha-792382) Calling .GetURL
	I1209 10:49:35.881298  627293 main.go:141] libmachine: (ha-792382) DBG | Using libvirt version 6000000
	I1209 10:49:35.883322  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.883653  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.883694  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.883840  627293 main.go:141] libmachine: Docker is up and running!
	I1209 10:49:35.883855  627293 main.go:141] libmachine: Reticulating splines...
	I1209 10:49:35.883863  627293 client.go:171] duration metric: took 23.63814664s to LocalClient.Create
	I1209 10:49:35.883888  627293 start.go:167] duration metric: took 23.638217304s to libmachine.API.Create "ha-792382"
	I1209 10:49:35.883903  627293 start.go:293] postStartSetup for "ha-792382" (driver="kvm2")
	I1209 10:49:35.883916  627293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 10:49:35.883934  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:35.884193  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 10:49:35.884224  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.886333  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.886719  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.886746  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.886830  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.887023  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.887177  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.887342  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:35.963840  627293 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 10:49:35.967678  627293 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 10:49:35.967709  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 10:49:35.967791  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 10:49:35.967866  627293 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 10:49:35.967876  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /etc/ssl/certs/6170172.pem
	I1209 10:49:35.967969  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 10:49:35.976432  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:49:35.997593  627293 start.go:296] duration metric: took 113.67336ms for postStartSetup
	I1209 10:49:35.997658  627293 main.go:141] libmachine: (ha-792382) Calling .GetConfigRaw
	I1209 10:49:35.998325  627293 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:49:36.000848  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.001239  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.001267  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.001479  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:49:36.001656  627293 start.go:128] duration metric: took 23.77358998s to createHost
	I1209 10:49:36.001690  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:36.004043  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.004400  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.004431  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.004549  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:36.004734  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:36.004893  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:36.005024  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:36.005202  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:36.005368  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:36.005389  627293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 10:49:36.102487  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733741376.078541083
	
	I1209 10:49:36.102513  627293 fix.go:216] guest clock: 1733741376.078541083
	I1209 10:49:36.102520  627293 fix.go:229] Guest: 2024-12-09 10:49:36.078541083 +0000 UTC Remote: 2024-12-09 10:49:36.001674575 +0000 UTC m=+23.885913523 (delta=76.866508ms)
	I1209 10:49:36.102562  627293 fix.go:200] guest clock delta is within tolerance: 76.866508ms
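The guest-clock check above is just a subtraction against a tolerance. Reproducing the arithmetic from the two timestamps logged (the 2s tolerance below is an assumed value for illustration, not taken from the log):

// Sketch of the guest/host clock delta computed by the fix.go lines above.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Date(2024, time.December, 9, 10, 49, 36, 78541083, time.UTC)
	remote := time.Date(2024, time.December, 9, 10, 49, 36, 1674575, time.UTC)
	delta := guest.Sub(remote) // 76.866508ms, matching the logged delta

	const tolerance = 2 * time.Second // assumed threshold for this sketch
	within := delta > -tolerance && delta < tolerance
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, within)
}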
	I1209 10:49:36.102567  627293 start.go:83] releasing machines lock for "ha-792382", held for 23.874584082s
	I1209 10:49:36.102599  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:36.102894  627293 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:49:36.105447  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.105786  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.105824  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.105948  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:36.106428  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:36.106564  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:36.106659  627293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 10:49:36.106712  627293 ssh_runner.go:195] Run: cat /version.json
	I1209 10:49:36.106729  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:36.106735  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:36.108936  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.108975  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.109292  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.109315  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.109331  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.109347  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.109458  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:36.109631  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:36.109648  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:36.109795  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:36.109838  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:36.109969  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:36.109997  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:36.110076  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:36.213912  627293 ssh_runner.go:195] Run: systemctl --version
	I1209 10:49:36.219737  627293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 10:49:36.373775  627293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 10:49:36.379232  627293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 10:49:36.379295  627293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 10:49:36.394395  627293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 10:49:36.394420  627293 start.go:495] detecting cgroup driver to use...
	I1209 10:49:36.394492  627293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 10:49:36.409701  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 10:49:36.422542  627293 docker.go:217] disabling cri-docker service (if available) ...
	I1209 10:49:36.422600  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 10:49:36.434811  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 10:49:36.447372  627293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 10:49:36.555614  627293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 10:49:36.712890  627293 docker.go:233] disabling docker service ...
	I1209 10:49:36.712971  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 10:49:36.726789  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 10:49:36.738514  627293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 10:49:36.860478  627293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 10:49:36.981442  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 10:49:36.994232  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 10:49:37.010639  627293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 10:49:37.010699  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.019623  627293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 10:49:37.019678  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.028741  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.037802  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.047112  627293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 10:49:37.056587  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.065626  627293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.081471  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.090400  627293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 10:49:37.098511  627293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 10:49:37.098567  627293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 10:49:37.112020  627293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 10:49:37.122574  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:49:37.244301  627293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 10:49:37.327990  627293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 10:49:37.328076  627293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 10:49:37.332519  627293 start.go:563] Will wait 60s for crictl version
	I1209 10:49:37.332580  627293 ssh_runner.go:195] Run: which crictl
	I1209 10:49:37.336027  627293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 10:49:37.371600  627293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 10:49:37.371689  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:49:37.397060  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:49:37.427301  627293 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 10:49:37.428631  627293 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:49:37.431338  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:37.431646  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:37.431664  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:37.431871  627293 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 10:49:37.435530  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:49:37.447078  627293 kubeadm.go:883] updating cluster {Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 10:49:37.447263  627293 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:49:37.447334  627293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 10:49:37.477408  627293 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 10:49:37.477478  627293 ssh_runner.go:195] Run: which lz4
	I1209 10:49:37.480957  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1209 10:49:37.481050  627293 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 10:49:37.484762  627293 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 10:49:37.484788  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 10:49:38.710605  627293 crio.go:462] duration metric: took 1.229579062s to copy over tarball
	I1209 10:49:38.710680  627293 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 10:49:40.690695  627293 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.979974769s)
	I1209 10:49:40.690734  627293 crio.go:469] duration metric: took 1.980097705s to extract the tarball
	I1209 10:49:40.690745  627293 ssh_runner.go:146] rm: /preloaded.tar.lz4
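The preload step above copies a pre-built image tarball into the guest and unpacks it under /var so CRI-O comes up with the Kubernetes images already present. A small sketch of driving the same extraction command from Go (the tar arguments mirror the logged invocation; running it locally rather than over SSH is an assumption for illustration):

// Sketch: extract a preloaded image tarball the way the log line above does.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	log.Println("preloaded images extracted into /var")
}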
	I1209 10:49:40.726929  627293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 10:49:40.771095  627293 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 10:49:40.771125  627293 cache_images.go:84] Images are preloaded, skipping loading
	I1209 10:49:40.771136  627293 kubeadm.go:934] updating node { 192.168.39.69 8443 v1.31.2 crio true true} ...
	I1209 10:49:40.771264  627293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792382 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 10:49:40.771357  627293 ssh_runner.go:195] Run: crio config
	I1209 10:49:40.816747  627293 cni.go:84] Creating CNI manager for ""
	I1209 10:49:40.816772  627293 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 10:49:40.816783  627293 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 10:49:40.816808  627293 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-792382 NodeName:ha-792382 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 10:49:40.816935  627293 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-792382"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.69"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.69"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
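A generated config like the one above is normally handed to kubeadm via --config once it has been copied onto the node. A sketch of that hand-off (assumed for illustration; the actual kubeadm invocation is not part of this excerpt):

// Sketch: run "kubeadm init" against the generated config file. The path is
// the one shown in the scp line later in this log; the flags are illustrative.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cfgPath := "/var/tmp/minikube/kubeadm.yaml.new"
	if _, err := os.Stat(cfgPath); err != nil {
		log.Fatal(err)
	}
	cmd := exec.Command("kubeadm", "init", "--config", cfgPath,
		"--ignore-preflight-errors=all")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}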
	I1209 10:49:40.816960  627293 kube-vip.go:115] generating kube-vip config ...
	I1209 10:49:40.817003  627293 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 10:49:40.831794  627293 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 10:49:40.831917  627293 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
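The kube-vip Pod above is a kubelet static pod: once the manifest lands in the staticPodPath configured earlier (/etc/kubernetes/manifests), the kubelet starts it without going through the API server. A minimal sketch of that installation step (writeStaticPod is a hypothetical helper, not minikube code):

// Sketch: install a static-pod manifest such as the kube-vip Pod above into
// the kubelet's staticPodPath so the kubelet launches it on its own.
package main

import (
	"log"
	"os"
	"path/filepath"
)

func writeStaticPod(manifestYAML []byte) error {
	dst := filepath.Join("/etc/kubernetes/manifests", "kube-vip.yaml")
	// 0644 so the kubelet (running as root) can read the manifest.
	return os.WriteFile(dst, manifestYAML, 0o644)
}

func main() {
	// Hypothetical local copy of the manifest printed above.
	yaml, err := os.ReadFile("kube-vip.yaml")
	if err != nil {
		log.Fatal(err)
	}
	if err := writeStaticPod(yaml); err != nil {
		log.Fatal(err)
	}
	log.Println("kube-vip static pod manifest installed")
}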
	I1209 10:49:40.831988  627293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 10:49:40.841266  627293 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 10:49:40.841344  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1209 10:49:40.850351  627293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1209 10:49:40.865301  627293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 10:49:40.880173  627293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1209 10:49:40.895089  627293 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1209 10:49:40.909836  627293 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 10:49:40.913336  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
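The bash one-liner above strips any stale control-plane.minikube.internal line from /etc/hosts and appends the VIP mapping. A rough Go equivalent of that rewrite (the file path in main is a stand-in, not what minikube touches):

package main

import (
	"fmt"
	"os"
	"strings"
)

// rewriteHosts drops any line already mapping hostname and appends a fresh "ip<TAB>hostname" entry,
// mirroring the grep -v / echo pipeline in the log above. Illustrative only.
func rewriteHosts(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := rewriteHosts("/tmp/hosts-copy", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}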
	I1209 10:49:40.924356  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:49:41.046665  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:49:41.063018  627293 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382 for IP: 192.168.39.69
	I1209 10:49:41.063041  627293 certs.go:194] generating shared ca certs ...
	I1209 10:49:41.063062  627293 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.063244  627293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 10:49:41.063289  627293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 10:49:41.063300  627293 certs.go:256] generating profile certs ...
	I1209 10:49:41.063355  627293 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key
	I1209 10:49:41.063367  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt with IP's: []
	I1209 10:49:41.129843  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt ...
	I1209 10:49:41.129870  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt: {Name:mkf984c9e526db9b810af9b168d6930601d7ed72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.130077  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key ...
	I1209 10:49:41.130094  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key: {Name:mk7ce7334711bfa08abe5164a05b3a0e352b8f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.130213  627293 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.27c0a765
	I1209 10:49:41.130234  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.27c0a765 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.254]
	I1209 10:49:41.505985  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.27c0a765 ...
	I1209 10:49:41.506019  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.27c0a765: {Name:mkd0b0619960f58505ea5c5b1f53c5a2d8b55baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.506242  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.27c0a765 ...
	I1209 10:49:41.506261  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.27c0a765: {Name:mk67bc39f2b151954187d9bdff2b01a7060c0444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.506368  627293 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.27c0a765 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt
	I1209 10:49:41.506445  627293 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.27c0a765 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key
	I1209 10:49:41.506499  627293 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key
	I1209 10:49:41.506513  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt with IP's: []
	I1209 10:49:41.582775  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt ...
	I1209 10:49:41.582805  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt: {Name:mk8ba382df4a8d41cbb5595274fb67800a146923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.582997  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key ...
	I1209 10:49:41.583012  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key: {Name:mka4002ccf01f2f736e4a0e998ece96628af1083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
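The profile certs generated here include an apiserver serving certificate whose IP SANs cover the service IP, localhost, the node IP 192.168.39.69 and the HA VIP 192.168.39.254 (see the "with IP's" line above), which is what lets clients reach the API server through kube-vip. A compact Go sketch of issuing such a CA-signed serving cert with crypto/x509; the subjects, key sizes and validity periods are illustrative, only the SAN list is taken from the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA generated on the fly; minikube instead loads the existing minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert signed for the same IP SANs the log reports.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.69"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}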
	I1209 10:49:41.583117  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 10:49:41.583147  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 10:49:41.583161  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 10:49:41.583173  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 10:49:41.583197  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 10:49:41.583210  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 10:49:41.583222  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 10:49:41.583234  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 10:49:41.583286  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 10:49:41.583322  627293 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 10:49:41.583332  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 10:49:41.583354  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 10:49:41.583377  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 10:49:41.583404  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 10:49:41.583441  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:49:41.583468  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /usr/share/ca-certificates/6170172.pem
	I1209 10:49:41.583481  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:49:41.583493  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem -> /usr/share/ca-certificates/617017.pem
	I1209 10:49:41.584023  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 10:49:41.607858  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 10:49:41.629298  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 10:49:41.650915  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 10:49:41.672892  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 10:49:41.695834  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 10:49:41.719653  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 10:49:41.742298  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 10:49:41.764468  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 10:49:41.786947  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 10:49:41.811703  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 10:49:41.837346  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 10:49:41.855854  627293 ssh_runner.go:195] Run: openssl version
	I1209 10:49:41.862371  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 10:49:41.872771  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 10:49:41.878140  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 10:49:41.878210  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 10:49:41.883640  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 10:49:41.893209  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 10:49:41.902869  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 10:49:41.906850  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 10:49:41.906898  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 10:49:41.912084  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 10:49:41.922405  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 10:49:41.932252  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:49:41.936213  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:49:41.936274  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:49:41.941486  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
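Each CA certificate copied to /usr/share/ca-certificates is hashed with openssl and exposed under /etc/ssl/certs as <hash>.0, which is what the test/ln commands above do. A small Go sketch of the same two steps; the paths are stand-ins and it shells out to openssl just like the logged commands:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a CA certificate and points <hash>.0
// in certDir at it, mirroring the openssl x509 -hash / ln -fs steps in the log.
func linkCACert(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem in the log above
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}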
	I1209 10:49:41.951188  627293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 10:49:41.954834  627293 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 10:49:41.954890  627293 kubeadm.go:392] StartCluster: {Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:49:41.954978  627293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 10:49:41.955029  627293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 10:49:41.990596  627293 cri.go:89] found id: ""
	I1209 10:49:41.990674  627293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 10:49:41.999783  627293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 10:49:42.008238  627293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 10:49:42.016846  627293 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 10:49:42.016865  627293 kubeadm.go:157] found existing configuration files:
	
	I1209 10:49:42.016904  627293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 10:49:42.024739  627293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 10:49:42.024809  627293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 10:49:42.033044  627293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 10:49:42.040972  627293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 10:49:42.041020  627293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 10:49:42.049238  627293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 10:49:42.056966  627293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 10:49:42.057032  627293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 10:49:42.065232  627293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 10:49:42.073082  627293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 10:49:42.073123  627293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 10:49:42.081145  627293 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 10:49:42.179849  627293 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 10:49:42.179910  627293 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 10:49:42.276408  627293 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 10:49:42.276561  627293 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 10:49:42.276716  627293 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 10:49:42.284852  627293 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 10:49:42.286435  627293 out.go:235]   - Generating certificates and keys ...
	I1209 10:49:42.286522  627293 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 10:49:42.286594  627293 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 10:49:42.590387  627293 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 10:49:42.745055  627293 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 10:49:42.887467  627293 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 10:49:43.151549  627293 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 10:49:43.207644  627293 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 10:49:43.207798  627293 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-792382 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	I1209 10:49:43.393565  627293 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 10:49:43.393710  627293 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-792382 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	I1209 10:49:43.595429  627293 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 10:49:43.672644  627293 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 10:49:43.819815  627293 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 10:49:43.819914  627293 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 10:49:44.041243  627293 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 10:49:44.173892  627293 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 10:49:44.337644  627293 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 10:49:44.481944  627293 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 10:49:44.539526  627293 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 10:49:44.540094  627293 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 10:49:44.543689  627293 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 10:49:44.575870  627293 out.go:235]   - Booting up control plane ...
	I1209 10:49:44.576053  627293 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 10:49:44.576187  627293 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 10:49:44.576309  627293 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 10:49:44.576459  627293 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 10:49:44.576560  627293 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 10:49:44.576606  627293 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 10:49:44.708364  627293 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 10:49:44.708561  627293 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 10:49:45.209677  627293 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.518639ms
	I1209 10:49:45.209811  627293 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 10:49:51.244834  627293 kubeadm.go:310] [api-check] The API server is healthy after 6.038769474s
	I1209 10:49:51.258766  627293 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 10:49:51.275586  627293 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 10:49:51.347505  627293 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 10:49:51.347730  627293 kubeadm.go:310] [mark-control-plane] Marking the node ha-792382 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 10:49:51.363557  627293 kubeadm.go:310] [bootstrap-token] Using token: 3fogiz.oanziwjzsm1wr1kv
	I1209 10:49:51.364826  627293 out.go:235]   - Configuring RBAC rules ...
	I1209 10:49:51.364951  627293 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 10:49:51.370786  627293 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 10:49:51.381797  627293 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 10:49:51.388857  627293 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 10:49:51.392743  627293 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 10:49:51.397933  627293 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 10:49:51.652382  627293 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 10:49:52.085079  627293 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 10:49:52.651844  627293 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 10:49:52.653438  627293 kubeadm.go:310] 
	I1209 10:49:52.653557  627293 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 10:49:52.653580  627293 kubeadm.go:310] 
	I1209 10:49:52.653672  627293 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 10:49:52.653682  627293 kubeadm.go:310] 
	I1209 10:49:52.653710  627293 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 10:49:52.653783  627293 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 10:49:52.653859  627293 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 10:49:52.653869  627293 kubeadm.go:310] 
	I1209 10:49:52.653946  627293 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 10:49:52.653955  627293 kubeadm.go:310] 
	I1209 10:49:52.654040  627293 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 10:49:52.654062  627293 kubeadm.go:310] 
	I1209 10:49:52.654116  627293 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 10:49:52.654229  627293 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 10:49:52.654328  627293 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 10:49:52.654347  627293 kubeadm.go:310] 
	I1209 10:49:52.654461  627293 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 10:49:52.654579  627293 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 10:49:52.654591  627293 kubeadm.go:310] 
	I1209 10:49:52.654710  627293 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3fogiz.oanziwjzsm1wr1kv \
	I1209 10:49:52.654860  627293 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 10:49:52.654894  627293 kubeadm.go:310] 	--control-plane 
	I1209 10:49:52.654903  627293 kubeadm.go:310] 
	I1209 10:49:52.655035  627293 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 10:49:52.655045  627293 kubeadm.go:310] 
	I1209 10:49:52.655125  627293 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3fogiz.oanziwjzsm1wr1kv \
	I1209 10:49:52.655253  627293 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 10:49:52.656128  627293 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
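The join commands printed by kubeadm carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info. A short Go sketch that recomputes it from a CA certificate (the path is an assumption):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Read the cluster CA cert; the path matches where minikube copies ca.crt in the log.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, which is what kubeadm pins against.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}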
	I1209 10:49:52.656180  627293 cni.go:84] Creating CNI manager for ""
	I1209 10:49:52.656208  627293 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 10:49:52.657779  627293 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1209 10:49:52.659033  627293 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 10:49:52.663808  627293 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1209 10:49:52.663829  627293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 10:49:52.683028  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 10:49:53.058715  627293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 10:49:53.058827  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:53.058833  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792382 minikube.k8s.io/updated_at=2024_12_09T10_49_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=ha-792382 minikube.k8s.io/primary=true
	I1209 10:49:53.086878  627293 ops.go:34] apiserver oom_adj: -16
	I1209 10:49:53.256202  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:53.756573  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:54.256994  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:54.756404  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:55.257137  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:55.756813  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:56.256686  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:56.352743  627293 kubeadm.go:1113] duration metric: took 3.294004538s to wait for elevateKubeSystemPrivileges
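elevateKubeSystemPrivileges binds the kube-system default service account to cluster-admin through the minikube-rbac ClusterRoleBinding, retrying the "get sa default" calls above until the service account exists. A hedged client-go sketch of the same binding; the names mirror the kubectl command in the log, while the kubeconfig lookup via the environment is an assumption:

package main

import (
	"context"
	"fmt"
	"os"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Equivalent of: kubectl create clusterrolebinding minikube-rbac \
	//   --clusterrole=cluster-admin --serviceaccount=kube-system:default
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		RoleRef:    rbacv1.RoleRef{APIGroup: "rbac.authorization.k8s.io", Kind: "ClusterRole", Name: "cluster-admin"},
		Subjects:   []rbacv1.Subject{{Kind: "ServiceAccount", Name: "default", Namespace: "kube-system"}},
	}
	if _, err := cs.RbacV1().ClusterRoleBindings().Create(context.TODO(), crb, metav1.CreateOptions{}); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("clusterrolebinding minikube-rbac created")
}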
	I1209 10:49:56.352793  627293 kubeadm.go:394] duration metric: took 14.397907015s to StartCluster
	I1209 10:49:56.352820  627293 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:56.352918  627293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:49:56.354019  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:56.354304  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 10:49:56.354300  627293 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:49:56.354326  627293 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 10:49:56.354417  627293 start.go:241] waiting for startup goroutines ...
	I1209 10:49:56.354432  627293 addons.go:69] Setting storage-provisioner=true in profile "ha-792382"
	I1209 10:49:56.354455  627293 addons.go:234] Setting addon storage-provisioner=true in "ha-792382"
	I1209 10:49:56.354464  627293 addons.go:69] Setting default-storageclass=true in profile "ha-792382"
	I1209 10:49:56.354495  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:49:56.354504  627293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-792382"
	I1209 10:49:56.354547  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:49:56.354836  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.354867  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.354970  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.355019  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.371190  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I1209 10:49:56.371264  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40229
	I1209 10:49:56.371767  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.371795  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.372258  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.372273  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.372420  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.372446  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.372589  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.372844  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.373068  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:49:56.373184  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.373230  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.375150  627293 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:49:56.375437  627293 kapi.go:59] client config for ha-792382: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt", KeyFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key", CAFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 10:49:56.375916  627293 cert_rotation.go:140] Starting client certificate rotation controller
	I1209 10:49:56.376176  627293 addons.go:234] Setting addon default-storageclass=true in "ha-792382"
	I1209 10:49:56.376225  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:49:56.376515  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.376560  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.389420  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45615
	I1209 10:49:56.390064  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.390648  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.390676  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.391072  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.391316  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:49:56.391995  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I1209 10:49:56.392539  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.393048  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.393071  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.393381  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.393446  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:56.393880  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.393927  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.395537  627293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 10:49:56.396877  627293 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 10:49:56.396901  627293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 10:49:56.396927  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:56.399986  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:56.400413  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:56.400445  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:56.400639  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:56.400862  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:56.401027  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:56.401192  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:56.410237  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I1209 10:49:56.411256  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.413501  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.413527  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.414391  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.414656  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:49:56.416343  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:56.416575  627293 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 10:49:56.416592  627293 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 10:49:56.416608  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:56.419239  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:56.419746  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:56.419776  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:56.419875  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:56.420076  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:56.420261  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:56.420422  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:56.497434  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 10:49:56.595755  627293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 10:49:56.677666  627293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 10:49:57.066334  627293 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
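The sed pipeline above rewrites the CoreDNS Corefile so host.minikube.internal resolves to the host IP 192.168.39.1, inserting a hosts block ahead of the forward directive before replacing the ConfigMap. A string-level Go sketch of that insertion (the sample Corefile in main is illustrative):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord adds a hosts{} block for host.minikube.internal just before the
// "forward . /etc/resolv.conf" directive, mimicking the sed pipeline in the log above.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n    cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}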
	I1209 10:49:57.258939  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.258974  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.258947  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.259060  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.259277  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.259322  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.259343  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.259358  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.259450  627293 main.go:141] libmachine: (ha-792382) DBG | Closing plugin on server side
	I1209 10:49:57.259495  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.259510  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.259523  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.259535  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.259638  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.259658  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.259664  627293 main.go:141] libmachine: (ha-792382) DBG | Closing plugin on server side
	I1209 10:49:57.259795  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.259815  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.259895  627293 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 10:49:57.259914  627293 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 10:49:57.260014  627293 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1209 10:49:57.260024  627293 round_trippers.go:469] Request Headers:
	I1209 10:49:57.260035  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:49:57.260046  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:49:57.272826  627293 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1209 10:49:57.273379  627293 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1209 10:49:57.273393  627293 round_trippers.go:469] Request Headers:
	I1209 10:49:57.273400  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:49:57.273404  627293 round_trippers.go:473]     Content-Type: application/json
	I1209 10:49:57.273408  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:49:57.276004  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:49:57.276170  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.276182  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.276582  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.276606  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.278423  627293 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1209 10:49:57.279715  627293 addons.go:510] duration metric: took 925.38672ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 10:49:57.279752  627293 start.go:246] waiting for cluster config update ...
	I1209 10:49:57.279765  627293 start.go:255] writing updated cluster config ...
	I1209 10:49:57.281341  627293 out.go:201] 
	I1209 10:49:57.282688  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:49:57.282758  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:49:57.284265  627293 out.go:177] * Starting "ha-792382-m02" control-plane node in "ha-792382" cluster
	I1209 10:49:57.285340  627293 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:49:57.285363  627293 cache.go:56] Caching tarball of preloaded images
	I1209 10:49:57.285479  627293 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 10:49:57.285499  627293 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 10:49:57.285580  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:49:57.285772  627293 start.go:360] acquireMachinesLock for ha-792382-m02: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 10:49:57.285830  627293 start.go:364] duration metric: took 34.649µs to acquireMachinesLock for "ha-792382-m02"
	I1209 10:49:57.285855  627293 start.go:93] Provisioning new machine with config: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:49:57.285945  627293 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1209 10:49:57.287544  627293 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 10:49:57.287637  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:57.287679  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:57.302923  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45627
	I1209 10:49:57.303345  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:57.303929  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:57.303955  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:57.304276  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:57.304507  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetMachineName
	I1209 10:49:57.304682  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:49:57.304915  627293 start.go:159] libmachine.API.Create for "ha-792382" (driver="kvm2")
	I1209 10:49:57.304958  627293 client.go:168] LocalClient.Create starting
	I1209 10:49:57.305006  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 10:49:57.305054  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:49:57.305076  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:49:57.305150  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 10:49:57.305184  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:49:57.305200  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:49:57.305226  627293 main.go:141] libmachine: Running pre-create checks...
	I1209 10:49:57.305237  627293 main.go:141] libmachine: (ha-792382-m02) Calling .PreCreateCheck
	I1209 10:49:57.305467  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetConfigRaw
	I1209 10:49:57.305949  627293 main.go:141] libmachine: Creating machine...
	I1209 10:49:57.305967  627293 main.go:141] libmachine: (ha-792382-m02) Calling .Create
	I1209 10:49:57.306165  627293 main.go:141] libmachine: (ha-792382-m02) Creating KVM machine...
	I1209 10:49:57.307365  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found existing default KVM network
	I1209 10:49:57.307532  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found existing private KVM network mk-ha-792382
	I1209 10:49:57.307606  627293 main.go:141] libmachine: (ha-792382-m02) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02 ...
	I1209 10:49:57.307640  627293 main.go:141] libmachine: (ha-792382-m02) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 10:49:57.307676  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:57.307595  627662 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:57.307776  627293 main.go:141] libmachine: (ha-792382-m02) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 10:49:57.586533  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:57.586377  627662 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa...
	I1209 10:49:57.697560  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:57.697424  627662 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/ha-792382-m02.rawdisk...
	I1209 10:49:57.697602  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Writing magic tar header
	I1209 10:49:57.697613  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Writing SSH key tar header
	I1209 10:49:57.697621  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:57.697562  627662 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02 ...
	I1209 10:49:57.697695  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02
	I1209 10:49:57.697714  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 10:49:57.697722  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02 (perms=drwx------)
	I1209 10:49:57.697738  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 10:49:57.697757  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:57.697771  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 10:49:57.697780  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 10:49:57.697790  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 10:49:57.697797  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins
	I1209 10:49:57.697803  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home
	I1209 10:49:57.697812  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Skipping /home - not owner
	I1209 10:49:57.697828  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 10:49:57.697853  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 10:49:57.697862  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 10:49:57.697867  627293 main.go:141] libmachine: (ha-792382-m02) Creating domain...
	I1209 10:49:57.698931  627293 main.go:141] libmachine: (ha-792382-m02) define libvirt domain using xml: 
	I1209 10:49:57.698948  627293 main.go:141] libmachine: (ha-792382-m02) <domain type='kvm'>
	I1209 10:49:57.698955  627293 main.go:141] libmachine: (ha-792382-m02)   <name>ha-792382-m02</name>
	I1209 10:49:57.698960  627293 main.go:141] libmachine: (ha-792382-m02)   <memory unit='MiB'>2200</memory>
	I1209 10:49:57.698965  627293 main.go:141] libmachine: (ha-792382-m02)   <vcpu>2</vcpu>
	I1209 10:49:57.698968  627293 main.go:141] libmachine: (ha-792382-m02)   <features>
	I1209 10:49:57.698974  627293 main.go:141] libmachine: (ha-792382-m02)     <acpi/>
	I1209 10:49:57.698977  627293 main.go:141] libmachine: (ha-792382-m02)     <apic/>
	I1209 10:49:57.698982  627293 main.go:141] libmachine: (ha-792382-m02)     <pae/>
	I1209 10:49:57.698985  627293 main.go:141] libmachine: (ha-792382-m02)     
	I1209 10:49:57.698991  627293 main.go:141] libmachine: (ha-792382-m02)   </features>
	I1209 10:49:57.698996  627293 main.go:141] libmachine: (ha-792382-m02)   <cpu mode='host-passthrough'>
	I1209 10:49:57.699000  627293 main.go:141] libmachine: (ha-792382-m02)   
	I1209 10:49:57.699004  627293 main.go:141] libmachine: (ha-792382-m02)   </cpu>
	I1209 10:49:57.699009  627293 main.go:141] libmachine: (ha-792382-m02)   <os>
	I1209 10:49:57.699013  627293 main.go:141] libmachine: (ha-792382-m02)     <type>hvm</type>
	I1209 10:49:57.699018  627293 main.go:141] libmachine: (ha-792382-m02)     <boot dev='cdrom'/>
	I1209 10:49:57.699034  627293 main.go:141] libmachine: (ha-792382-m02)     <boot dev='hd'/>
	I1209 10:49:57.699053  627293 main.go:141] libmachine: (ha-792382-m02)     <bootmenu enable='no'/>
	I1209 10:49:57.699065  627293 main.go:141] libmachine: (ha-792382-m02)   </os>
	I1209 10:49:57.699070  627293 main.go:141] libmachine: (ha-792382-m02)   <devices>
	I1209 10:49:57.699074  627293 main.go:141] libmachine: (ha-792382-m02)     <disk type='file' device='cdrom'>
	I1209 10:49:57.699083  627293 main.go:141] libmachine: (ha-792382-m02)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/boot2docker.iso'/>
	I1209 10:49:57.699087  627293 main.go:141] libmachine: (ha-792382-m02)       <target dev='hdc' bus='scsi'/>
	I1209 10:49:57.699092  627293 main.go:141] libmachine: (ha-792382-m02)       <readonly/>
	I1209 10:49:57.699095  627293 main.go:141] libmachine: (ha-792382-m02)     </disk>
	I1209 10:49:57.699101  627293 main.go:141] libmachine: (ha-792382-m02)     <disk type='file' device='disk'>
	I1209 10:49:57.699106  627293 main.go:141] libmachine: (ha-792382-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 10:49:57.699114  627293 main.go:141] libmachine: (ha-792382-m02)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/ha-792382-m02.rawdisk'/>
	I1209 10:49:57.699122  627293 main.go:141] libmachine: (ha-792382-m02)       <target dev='hda' bus='virtio'/>
	I1209 10:49:57.699137  627293 main.go:141] libmachine: (ha-792382-m02)     </disk>
	I1209 10:49:57.699147  627293 main.go:141] libmachine: (ha-792382-m02)     <interface type='network'>
	I1209 10:49:57.699179  627293 main.go:141] libmachine: (ha-792382-m02)       <source network='mk-ha-792382'/>
	I1209 10:49:57.699205  627293 main.go:141] libmachine: (ha-792382-m02)       <model type='virtio'/>
	I1209 10:49:57.699214  627293 main.go:141] libmachine: (ha-792382-m02)     </interface>
	I1209 10:49:57.699227  627293 main.go:141] libmachine: (ha-792382-m02)     <interface type='network'>
	I1209 10:49:57.699257  627293 main.go:141] libmachine: (ha-792382-m02)       <source network='default'/>
	I1209 10:49:57.699276  627293 main.go:141] libmachine: (ha-792382-m02)       <model type='virtio'/>
	I1209 10:49:57.699287  627293 main.go:141] libmachine: (ha-792382-m02)     </interface>
	I1209 10:49:57.699295  627293 main.go:141] libmachine: (ha-792382-m02)     <serial type='pty'>
	I1209 10:49:57.699302  627293 main.go:141] libmachine: (ha-792382-m02)       <target port='0'/>
	I1209 10:49:57.699309  627293 main.go:141] libmachine: (ha-792382-m02)     </serial>
	I1209 10:49:57.699314  627293 main.go:141] libmachine: (ha-792382-m02)     <console type='pty'>
	I1209 10:49:57.699320  627293 main.go:141] libmachine: (ha-792382-m02)       <target type='serial' port='0'/>
	I1209 10:49:57.699325  627293 main.go:141] libmachine: (ha-792382-m02)     </console>
	I1209 10:49:57.699332  627293 main.go:141] libmachine: (ha-792382-m02)     <rng model='virtio'>
	I1209 10:49:57.699338  627293 main.go:141] libmachine: (ha-792382-m02)       <backend model='random'>/dev/random</backend>
	I1209 10:49:57.699352  627293 main.go:141] libmachine: (ha-792382-m02)     </rng>
	I1209 10:49:57.699360  627293 main.go:141] libmachine: (ha-792382-m02)     
	I1209 10:49:57.699364  627293 main.go:141] libmachine: (ha-792382-m02)     
	I1209 10:49:57.699370  627293 main.go:141] libmachine: (ha-792382-m02)   </devices>
	I1209 10:49:57.699374  627293 main.go:141] libmachine: (ha-792382-m02) </domain>
	I1209 10:49:57.699384  627293 main.go:141] libmachine: (ha-792382-m02) 
	I1209 10:49:57.706829  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:be:31:4f in network default
	I1209 10:49:57.707394  627293 main.go:141] libmachine: (ha-792382-m02) Ensuring networks are active...
	I1209 10:49:57.707420  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:49:57.708099  627293 main.go:141] libmachine: (ha-792382-m02) Ensuring network default is active
	I1209 10:49:57.708447  627293 main.go:141] libmachine: (ha-792382-m02) Ensuring network mk-ha-792382 is active
	I1209 10:49:57.708833  627293 main.go:141] libmachine: (ha-792382-m02) Getting domain xml...
	I1209 10:49:57.709562  627293 main.go:141] libmachine: (ha-792382-m02) Creating domain...
	I1209 10:49:58.965991  627293 main.go:141] libmachine: (ha-792382-m02) Waiting to get IP...
	I1209 10:49:58.967025  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:49:58.967615  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:49:58.967718  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:58.967609  627662 retry.go:31] will retry after 289.483594ms: waiting for machine to come up
	I1209 10:49:59.259398  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:49:59.259929  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:49:59.259958  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:59.259877  627662 retry.go:31] will retry after 368.739813ms: waiting for machine to come up
	I1209 10:49:59.630595  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:49:59.631082  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:49:59.631111  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:59.631032  627662 retry.go:31] will retry after 468.793736ms: waiting for machine to come up
	I1209 10:50:00.101924  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:00.102437  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:00.102468  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:00.102389  627662 retry.go:31] will retry after 467.16032ms: waiting for machine to come up
	I1209 10:50:00.571568  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:00.572085  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:00.572158  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:00.571967  627662 retry.go:31] will retry after 614.331886ms: waiting for machine to come up
	I1209 10:50:01.188165  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:01.188721  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:01.188753  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:01.188683  627662 retry.go:31] will retry after 622.291039ms: waiting for machine to come up
	I1209 10:50:01.812761  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:01.813166  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:01.813197  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:01.813093  627662 retry.go:31] will retry after 970.350077ms: waiting for machine to come up
	I1209 10:50:02.785861  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:02.786416  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:02.786477  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:02.786368  627662 retry.go:31] will retry after 1.09205339s: waiting for machine to come up
	I1209 10:50:03.879814  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:03.880303  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:03.880327  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:03.880248  627662 retry.go:31] will retry after 1.765651975s: waiting for machine to come up
	I1209 10:50:05.648159  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:05.648671  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:05.648696  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:05.648615  627662 retry.go:31] will retry after 1.762832578s: waiting for machine to come up
	I1209 10:50:07.413599  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:07.414030  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:07.414059  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:07.413978  627662 retry.go:31] will retry after 2.150383903s: waiting for machine to come up
	I1209 10:50:09.565911  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:09.566390  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:09.566420  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:09.566350  627662 retry.go:31] will retry after 3.049537741s: waiting for machine to come up
	I1209 10:50:12.617744  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:12.618241  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:12.618276  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:12.618155  627662 retry.go:31] will retry after 3.599687882s: waiting for machine to come up
	I1209 10:50:16.219399  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:16.219837  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:16.219868  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:16.219789  627662 retry.go:31] will retry after 3.518875962s: waiting for machine to come up
	I1209 10:50:19.740130  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.740985  627293 main.go:141] libmachine: (ha-792382-m02) Found IP for machine: 192.168.39.89
	I1209 10:50:19.741024  627293 main.go:141] libmachine: (ha-792382-m02) Reserving static IP address...
	I1209 10:50:19.741037  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.741518  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find host DHCP lease matching {name: "ha-792382-m02", mac: "52:54:00:95:40:00", ip: "192.168.39.89"} in network mk-ha-792382
	I1209 10:50:19.814048  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Getting to WaitForSSH function...
	I1209 10:50:19.814070  627293 main.go:141] libmachine: (ha-792382-m02) Reserved static IP address: 192.168.39.89
	I1209 10:50:19.814078  627293 main.go:141] libmachine: (ha-792382-m02) Waiting for SSH to be available...
	I1209 10:50:19.816613  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.817057  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:95:40:00}
	I1209 10:50:19.817166  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.817261  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Using SSH client type: external
	I1209 10:50:19.817282  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa (-rw-------)
	I1209 10:50:19.817362  627293 main.go:141] libmachine: (ha-792382-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 10:50:19.817390  627293 main.go:141] libmachine: (ha-792382-m02) DBG | About to run SSH command:
	I1209 10:50:19.817411  627293 main.go:141] libmachine: (ha-792382-m02) DBG | exit 0
	I1209 10:50:19.942297  627293 main.go:141] libmachine: (ha-792382-m02) DBG | SSH cmd err, output: <nil>: 
	I1209 10:50:19.942595  627293 main.go:141] libmachine: (ha-792382-m02) KVM machine creation complete!
	I1209 10:50:19.942914  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetConfigRaw
	I1209 10:50:19.943559  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:19.943781  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:19.943947  627293 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 10:50:19.943965  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetState
	I1209 10:50:19.945579  627293 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 10:50:19.945598  627293 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 10:50:19.945607  627293 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 10:50:19.945616  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:19.947916  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.948374  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:19.948400  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.948582  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:19.948773  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:19.948920  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:19.949049  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:19.949307  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:19.949555  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:19.949573  627293 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 10:50:20.053499  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:50:20.053528  627293 main.go:141] libmachine: Detecting the provisioner...
	I1209 10:50:20.053541  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.056444  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.056881  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.056911  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.057119  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.057366  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.057545  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.057698  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.057856  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:20.058022  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:20.058034  627293 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 10:50:20.162532  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 10:50:20.162621  627293 main.go:141] libmachine: found compatible host: buildroot
	I1209 10:50:20.162636  627293 main.go:141] libmachine: Provisioning with buildroot...
	I1209 10:50:20.162651  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetMachineName
	I1209 10:50:20.162892  627293 buildroot.go:166] provisioning hostname "ha-792382-m02"
	I1209 10:50:20.162921  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetMachineName
	I1209 10:50:20.163135  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.165692  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.166051  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.166078  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.166237  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.166425  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.166592  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.166734  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.166863  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:20.167071  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:20.167087  627293 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792382-m02 && echo "ha-792382-m02" | sudo tee /etc/hostname
	I1209 10:50:20.285783  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792382-m02
	
	I1209 10:50:20.285812  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.288581  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.288945  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.289006  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.289156  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.289374  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.289525  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.289675  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.289834  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:20.290050  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:20.290067  627293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792382-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792382-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792382-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 10:50:20.403745  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:50:20.403780  627293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 10:50:20.403797  627293 buildroot.go:174] setting up certificates
	I1209 10:50:20.403807  627293 provision.go:84] configureAuth start
	I1209 10:50:20.403816  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetMachineName
	I1209 10:50:20.404127  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetIP
	I1209 10:50:20.406853  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.407317  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.407339  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.407523  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.410235  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.410616  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.410641  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.410813  627293 provision.go:143] copyHostCerts
	I1209 10:50:20.410851  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:50:20.410897  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 10:50:20.410910  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:50:20.410996  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 10:50:20.411092  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:50:20.411117  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 10:50:20.411127  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:50:20.411167  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 10:50:20.411241  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:50:20.411265  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 10:50:20.411274  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:50:20.411310  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 10:50:20.411379  627293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.ha-792382-m02 san=[127.0.0.1 192.168.39.89 ha-792382-m02 localhost minikube]
	I1209 10:50:20.506946  627293 provision.go:177] copyRemoteCerts
	I1209 10:50:20.507013  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 10:50:20.507043  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.509588  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.509997  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.510031  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.510256  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.510485  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.510630  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.510792  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa Username:docker}
	I1209 10:50:20.591669  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 10:50:20.591738  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 10:50:20.614379  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 10:50:20.614474  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 10:50:20.635752  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 10:50:20.635819  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 10:50:20.657840  627293 provision.go:87] duration metric: took 254.019642ms to configureAuth
	I1209 10:50:20.657873  627293 buildroot.go:189] setting minikube options for container-runtime
	I1209 10:50:20.658088  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:50:20.658221  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.661758  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.662150  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.662207  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.662350  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.662590  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.662773  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.662982  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.663174  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:20.663396  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:20.663417  627293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 10:50:20.895342  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 10:50:20.895376  627293 main.go:141] libmachine: Checking connection to Docker...
	I1209 10:50:20.895386  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetURL
	I1209 10:50:20.896678  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Using libvirt version 6000000
	I1209 10:50:20.899127  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.899492  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.899524  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.899662  627293 main.go:141] libmachine: Docker is up and running!
	I1209 10:50:20.899675  627293 main.go:141] libmachine: Reticulating splines...
	I1209 10:50:20.899683  627293 client.go:171] duration metric: took 23.594715586s to LocalClient.Create
	I1209 10:50:20.899712  627293 start.go:167] duration metric: took 23.594799788s to libmachine.API.Create "ha-792382"
	I1209 10:50:20.899727  627293 start.go:293] postStartSetup for "ha-792382-m02" (driver="kvm2")
	I1209 10:50:20.899740  627293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 10:50:20.899762  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:20.899988  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 10:50:20.900011  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.902193  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.902545  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.902574  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.902733  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.902907  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.903055  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.903224  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa Username:docker}
	I1209 10:50:20.987979  627293 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 10:50:20.992183  627293 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 10:50:20.992210  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 10:50:20.992280  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 10:50:20.992373  627293 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 10:50:20.992388  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /etc/ssl/certs/6170172.pem
	I1209 10:50:20.992517  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 10:50:21.001255  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:50:21.023333  627293 start.go:296] duration metric: took 123.590873ms for postStartSetup
	I1209 10:50:21.023382  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetConfigRaw
	I1209 10:50:21.024074  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetIP
	I1209 10:50:21.026760  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.027216  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.027253  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.027452  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:50:21.027657  627293 start.go:128] duration metric: took 23.741699232s to createHost
	I1209 10:50:21.027689  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:21.029948  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.030322  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.030343  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.030537  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:21.030708  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:21.030868  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:21.031040  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:21.031235  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:21.031525  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:21.031542  627293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 10:50:21.134634  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733741421.109382404
	
	I1209 10:50:21.134664  627293 fix.go:216] guest clock: 1733741421.109382404
	I1209 10:50:21.134671  627293 fix.go:229] Guest: 2024-12-09 10:50:21.109382404 +0000 UTC Remote: 2024-12-09 10:50:21.027672389 +0000 UTC m=+68.911911388 (delta=81.710015ms)
	I1209 10:50:21.134687  627293 fix.go:200] guest clock delta is within tolerance: 81.710015ms
	I1209 10:50:21.134693  627293 start.go:83] releasing machines lock for "ha-792382-m02", held for 23.84885063s
	I1209 10:50:21.134711  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:21.135011  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetIP
	I1209 10:50:21.137922  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.138329  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.138359  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.140711  627293 out.go:177] * Found network options:
	I1209 10:50:21.142033  627293 out.go:177]   - NO_PROXY=192.168.39.69
	W1209 10:50:21.143264  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 10:50:21.143304  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:21.143961  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:21.144186  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:21.144305  627293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 10:50:21.144354  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	W1209 10:50:21.144454  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 10:50:21.144534  627293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 10:50:21.144559  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:21.147622  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.147846  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.147959  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.147994  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.148084  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:21.148250  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:21.148369  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.148396  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.148419  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:21.148619  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa Username:docker}
	I1209 10:50:21.148763  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:21.148870  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:21.149177  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:21.149326  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa Username:docker}
	I1209 10:50:21.377528  627293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 10:50:21.383869  627293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 10:50:21.383962  627293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 10:50:21.402713  627293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 10:50:21.402747  627293 start.go:495] detecting cgroup driver to use...
	I1209 10:50:21.402836  627293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 10:50:21.418644  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 10:50:21.431825  627293 docker.go:217] disabling cri-docker service (if available) ...
	I1209 10:50:21.431894  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 10:50:21.445030  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 10:50:21.458235  627293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 10:50:21.576888  627293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 10:50:21.715254  627293 docker.go:233] disabling docker service ...
	I1209 10:50:21.715337  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 10:50:21.728777  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 10:50:21.741484  627293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 10:50:21.877920  627293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 10:50:21.987438  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 10:50:22.000287  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 10:50:22.017967  627293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 10:50:22.018044  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.027586  627293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 10:50:22.027647  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.037032  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.046716  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.056390  627293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 10:50:22.066025  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.075591  627293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.092169  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.102292  627293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 10:50:22.111580  627293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 10:50:22.111645  627293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 10:50:22.124823  627293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 10:50:22.134059  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:50:22.267517  627293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 10:50:22.360113  627293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 10:50:22.360202  627293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 10:50:22.366049  627293 start.go:563] Will wait 60s for crictl version
	I1209 10:50:22.366124  627293 ssh_runner.go:195] Run: which crictl
	I1209 10:50:22.369685  627293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 10:50:22.406117  627293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 10:50:22.406233  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:50:22.433831  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:50:22.466702  627293 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 10:50:22.468114  627293 out.go:177]   - env NO_PROXY=192.168.39.69
	I1209 10:50:22.469415  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetIP
	I1209 10:50:22.472354  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:22.472792  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:22.472838  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:22.473063  627293 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 10:50:22.478206  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:50:22.490975  627293 mustload.go:65] Loading cluster: ha-792382
	I1209 10:50:22.491223  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:50:22.491515  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:50:22.491566  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:50:22.507354  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34813
	I1209 10:50:22.507839  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:50:22.508378  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:50:22.508407  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:50:22.508811  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:50:22.509022  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:50:22.510469  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:50:22.510748  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:50:22.510785  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:50:22.525474  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34445
	I1209 10:50:22.525972  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:50:22.526524  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:50:22.526554  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:50:22.526848  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:50:22.527055  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:50:22.527271  627293 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382 for IP: 192.168.39.89
	I1209 10:50:22.527285  627293 certs.go:194] generating shared ca certs ...
	I1209 10:50:22.527308  627293 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:50:22.527465  627293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 10:50:22.527507  627293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 10:50:22.527514  627293 certs.go:256] generating profile certs ...
	I1209 10:50:22.527587  627293 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key
	I1209 10:50:22.527613  627293 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.8c4cfabb
	I1209 10:50:22.527628  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.8c4cfabb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.89 192.168.39.254]
	I1209 10:50:22.618893  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.8c4cfabb ...
	I1209 10:50:22.618924  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.8c4cfabb: {Name:mk9fc14aa3aaf65091f9f2d45f3765515e31473e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:50:22.619129  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.8c4cfabb ...
	I1209 10:50:22.619148  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.8c4cfabb: {Name:mk41f99fa98267e5a58e4b407fa7296350fea4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:50:22.619255  627293 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.8c4cfabb -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt
	I1209 10:50:22.619394  627293 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.8c4cfabb -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key
	I1209 10:50:22.619538  627293 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key
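
Note: the profile cert generation above issues the apiserver serving certificate with every address a client may dial as an IP SAN: the kubernetes Service ClusterIP, loopback, both control-plane node IPs, and the kube-vip HA VIP 192.168.39.254. A self-contained Go sketch of that issuance (hypothetical; the real run signs with the existing ~/.minikube/ca.key rather than a throwaway CA):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for minikubeCA.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now().Add(-time.Hour),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// apiserver serving cert: every dialable address must appear as a SAN.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"kubernetes", "kubernetes.default", "control-plane.minikube.internal"},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"),      // kubernetes Service ClusterIP
    			net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.69"),  // ha-792382 (first control plane)
    			net.ParseIP("192.168.39.89"),  // ha-792382-m02 (joining node)
    			net.ParseIP("192.168.39.254"), // HA virtual IP served by kube-vip
    		},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
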
	I1209 10:50:22.619555  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 10:50:22.619568  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 10:50:22.619579  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 10:50:22.619593  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 10:50:22.619603  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 10:50:22.619614  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 10:50:22.619626  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 10:50:22.619636  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 10:50:22.619683  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 10:50:22.619711  627293 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 10:50:22.619720  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 10:50:22.619743  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 10:50:22.619767  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 10:50:22.619790  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 10:50:22.619828  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:50:22.619853  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /usr/share/ca-certificates/6170172.pem
	I1209 10:50:22.619866  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:50:22.619877  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem -> /usr/share/ca-certificates/617017.pem
	I1209 10:50:22.619908  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:50:22.623291  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:50:22.623706  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:50:22.623734  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:50:22.623919  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:50:22.624122  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:50:22.624329  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:50:22.624526  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:50:22.694590  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 10:50:22.700190  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 10:50:22.715537  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 10:50:22.720737  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 10:50:22.731623  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 10:50:22.736050  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 10:50:22.747578  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 10:50:22.752312  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 10:50:22.763588  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 10:50:22.768050  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 10:50:22.777655  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 10:50:22.781717  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1209 10:50:22.792464  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 10:50:22.816318  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 10:50:22.837988  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 10:50:22.861671  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 10:50:22.883735  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1209 10:50:22.904888  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 10:50:22.926092  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 10:50:22.947329  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 10:50:22.968466  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 10:50:22.989908  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 10:50:23.012190  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 10:50:23.036349  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 10:50:23.051329  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 10:50:23.066824  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 10:50:23.081626  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 10:50:23.096856  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 10:50:23.112249  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1209 10:50:23.126784  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 10:50:23.141365  627293 ssh_runner.go:195] Run: openssl version
	I1209 10:50:23.146879  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 10:50:23.156698  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 10:50:23.160669  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 10:50:23.160717  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 10:50:23.166987  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 10:50:23.176745  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 10:50:23.186586  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:50:23.190639  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:50:23.190687  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:50:23.195990  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 10:50:23.205745  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 10:50:23.215364  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 10:50:23.219316  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 10:50:23.219368  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 10:50:23.225208  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
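
Note: the ls/openssl/ln sequence above installs each CA under /usr/share/ca-certificates and links it into /etc/ssl/certs by its OpenSSL subject hash (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the test certs), which is how OpenSSL-based clients locate trust anchors. A simplified Go sketch of the hash-and-link step (hypothetical helper; the real run also links the PEM under /etc/ssl/certs by name first, and needs root plus openssl):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash reproduces the `openssl x509 -hash -noout` + `ln -fs` pair:
    // /etc/ssl/certs/<subject-hash>.0 -> the installed PEM.
    func linkBySubjectHash(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("openssl x509 -hash %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	for _, p := range []string{
    		"/usr/share/ca-certificates/minikubeCA.pem",
    		"/usr/share/ca-certificates/617017.pem",
    		"/usr/share/ca-certificates/6170172.pem",
    	} {
    		if err := linkBySubjectHash(p); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    		}
    	}
    }
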
	I1209 10:50:23.235141  627293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 10:50:23.238820  627293 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 10:50:23.238882  627293 kubeadm.go:934] updating node {m02 192.168.39.89 8443 v1.31.2 crio true true} ...
	I1209 10:50:23.238988  627293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792382-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 10:50:23.239016  627293 kube-vip.go:115] generating kube-vip config ...
	I1209 10:50:23.239060  627293 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 10:50:23.254073  627293 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 10:50:23.254184  627293 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
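
Note: the static pod manifest generated above is what keeps the control-plane VIP 192.168.39.254 floating between control-plane nodes via leader election and ARP, and with lb_enable/lb_port it also spreads apiserver traffic across members on port 8443. A minimal sketch of rendering such a manifest from per-cluster values (hypothetical and much reduced; minikube's kube-vip.go uses its own template and env list):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Values that vary per cluster; the rest of the manifest is boilerplate.
    type kubeVipValues struct {
    	VIP      string // HA virtual IP announced via ARP
    	Port     string // apiserver port behind the VIP
    	Image    string
    	LBEnable bool // also load-balance apiserver traffic across control planes
    }

    const manifestTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{.Image}}
        args: ["manager"]
        env:
        - {name: vip_arp, value: "true"}
        - {name: port, value: "{{.Port}}"}
        - {name: address, value: "{{.VIP}}"}
        - {name: cp_enable, value: "true"}
        - {name: lb_enable, value: "{{.LBEnable}}"}
      hostNetwork: true
    `

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
    	v := kubeVipValues{VIP: "192.168.39.254", Port: "8443", Image: "ghcr.io/kube-vip/kube-vip:v0.8.7", LBEnable: true}
    	// A real bootstrapper would write this to /etc/kubernetes/manifests/kube-vip.yaml.
    	if err := t.Execute(os.Stdout, v); err != nil {
    		panic(err)
    	}
    }
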
	I1209 10:50:23.254233  627293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 10:50:23.263688  627293 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1209 10:50:23.263749  627293 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1209 10:50:23.272494  627293 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1209 10:50:23.272527  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 10:50:23.272570  627293 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1209 10:50:23.272599  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 10:50:23.272611  627293 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1209 10:50:23.276784  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1209 10:50:23.276819  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1209 10:50:24.168986  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 10:50:24.169072  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 10:50:24.174707  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1209 10:50:24.174764  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1209 10:50:24.294393  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:50:24.325197  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 10:50:24.325289  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 10:50:24.335547  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1209 10:50:24.335594  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
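
Note: the kubectl/kubeadm/kubelet transfers above are downloaded from dl.k8s.io and verified against the .sha256 file published next to each binary before being cached locally and copied to /var/lib/minikube/binaries on the node. A hedged sketch of that download-and-verify step (not minikube's download.go; paths under /tmp are illustrative):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetch downloads url into dst and returns the hex SHA-256 of what was written.
    func fetch(url, dst string) (string, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return "", err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return "", fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	f, err := os.Create(dst)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()
    	h := sha256.New()
    	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
    		return "", err
    	}
    	return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
    	const base = "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/"
    	for _, name := range []string{"kubectl", "kubeadm", "kubelet"} {
    		got, err := fetch(base+name, "/tmp/"+name)
    		if err != nil {
    			panic(err)
    		}
    		resp, err := http.Get(base + name + ".sha256")
    		if err != nil {
    			panic(err)
    		}
    		want, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		if got != strings.TrimSpace(string(want)) {
    			panic(name + ": checksum mismatch")
    		}
    		fmt.Println(name, "OK", got)
    	}
    }
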
	I1209 10:50:24.706937  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 10:50:24.715886  627293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 10:50:24.731189  627293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 10:50:24.746662  627293 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 10:50:24.762089  627293 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 10:50:24.765881  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:50:24.777191  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:50:24.904006  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:50:24.921009  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:50:24.921461  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:50:24.921511  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:50:24.937482  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I1209 10:50:24.937973  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:50:24.938486  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:50:24.938508  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:50:24.938885  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:50:24.939098  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:50:24.939248  627293 start.go:317] joinCluster: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:50:24.939386  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 10:50:24.939418  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:50:24.942285  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:50:24.942827  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:50:24.942855  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:50:24.942985  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:50:24.943215  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:50:24.943387  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:50:24.943515  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:50:25.097594  627293 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:50:25.097643  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dvotig.smgl74cs6saznre8 --discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-792382-m02 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443"
	I1209 10:50:47.230030  627293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dvotig.smgl74cs6saznre8 --discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-792382-m02 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443": (22.132356511s)
	I1209 10:50:47.230081  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1209 10:50:47.777805  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792382-m02 minikube.k8s.io/updated_at=2024_12_09T10_50_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=ha-792382 minikube.k8s.io/primary=false
	I1209 10:50:47.938150  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-792382-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 10:50:48.082480  627293 start.go:319] duration metric: took 23.143228187s to joinCluster
	I1209 10:50:48.082581  627293 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:50:48.082941  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:50:48.084770  627293 out.go:177] * Verifying Kubernetes components...
	I1209 10:50:48.085991  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:50:48.337603  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:50:48.368412  627293 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:50:48.368651  627293 kapi.go:59] client config for ha-792382: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt", KeyFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key", CAFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 10:50:48.368776  627293 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.69:8443
	I1209 10:50:48.369027  627293 node_ready.go:35] waiting up to 6m0s for node "ha-792382-m02" to be "Ready" ...
	I1209 10:50:48.369182  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:48.369197  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:48.369210  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:48.369215  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:48.379219  627293 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1209 10:50:48.869436  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:48.869471  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:48.869484  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:48.869491  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:48.873562  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:50:49.369649  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:49.369671  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:49.369679  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:49.369685  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:49.372678  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:50:49.869490  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:49.869516  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:49.869525  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:49.869529  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:49.872495  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:50:50.369998  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:50.370028  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:50.370038  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:50.370043  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:50.374983  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:50:50.377595  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:50:50.869651  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:50.869674  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:50.869688  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:50.869692  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:50.906453  627293 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I1209 10:50:51.369287  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:51.369317  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:51.369329  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:51.369335  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:51.372362  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:51.870258  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:51.870289  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:51.870302  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:51.870310  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:51.873898  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:52.370080  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:52.370105  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:52.370115  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:52.370118  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:52.376430  627293 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 10:50:52.869331  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:52.869355  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:52.869364  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:52.869368  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:52.873136  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:52.873737  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:50:53.370232  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:53.370258  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:53.370267  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:53.370272  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:53.373647  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:53.869640  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:53.869666  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:53.869674  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:53.869678  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:53.872620  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:50:54.369762  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:54.369789  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:54.369798  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:54.369802  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:54.373551  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:54.869513  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:54.869538  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:54.869547  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:54.869552  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:54.872817  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:55.369351  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:55.369377  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:55.369387  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:55.369391  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:55.372662  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:55.373185  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:50:55.869601  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:55.869626  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:55.869636  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:55.869642  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:55.873128  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:56.369713  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:56.369741  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:56.369751  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:56.369755  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:56.373053  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:56.870191  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:56.870225  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:56.870238  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:56.870247  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:56.873685  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:57.369825  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:57.369849  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:57.369858  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:57.369861  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:57.373394  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:57.373898  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:50:57.869257  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:57.869284  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:57.869293  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:57.869297  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:57.872590  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:58.369600  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:58.369629  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:58.369641  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:58.369648  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:58.372771  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:58.869748  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:58.869775  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:58.869784  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:58.869788  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:58.873037  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:59.369979  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:59.370004  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:59.370013  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:59.370017  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:59.373442  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:59.869269  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:59.869294  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:59.869309  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:59.869314  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:59.872720  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:59.873370  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:51:00.369254  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:00.369281  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:00.369289  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:00.369294  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:00.372431  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:00.869327  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:00.869352  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:00.869361  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:00.869365  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:00.872790  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:01.369711  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:01.369743  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:01.369755  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:01.369761  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:01.372739  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:01.869629  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:01.869659  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:01.869672  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:01.869680  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:01.873312  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:01.873858  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:51:02.369761  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:02.369798  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:02.369811  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:02.369818  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:02.373514  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:02.869485  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:02.869511  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:02.869524  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:02.869530  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:02.875847  627293 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 10:51:03.369998  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:03.370025  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:03.370034  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:03.370039  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:03.373227  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:03.870196  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:03.870226  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:03.870238  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:03.870245  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:03.873280  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:03.873981  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:51:04.369276  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:04.369305  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:04.369314  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:04.369318  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:04.373386  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:04.869282  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:04.869309  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:04.869317  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:04.869321  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:04.872919  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:05.369501  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:05.369531  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.369544  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.369551  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.373273  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:05.869275  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:05.869301  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.869313  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.869319  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.875077  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:51:05.875712  627293 node_ready.go:49] node "ha-792382-m02" has status "Ready":"True"
	I1209 10:51:05.875741  627293 node_ready.go:38] duration metric: took 17.506691417s for node "ha-792382-m02" to be "Ready" ...
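
Note: the node_ready loop above polls GET /api/v1/nodes/ha-792382-m02 roughly every 500ms until the NodeReady condition becomes True (about 17.5s here). A sketch of the same wait using client-go (hypothetical; minikube wraps an equivalent check in its own logging round-tripper, hence the verbose GET lines):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	nodeName := "ha-792382-m02"
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			n, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat apiserver blips as transient and keep polling
    			}
    			for _, c := range n.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(nodeName, "is Ready")
    }
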
	I1209 10:51:05.875753  627293 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 10:51:05.875877  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:05.875894  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.875903  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.875908  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.880622  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:05.886687  627293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.886796  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8hlml
	I1209 10:51:05.886807  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.886815  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.886820  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.891623  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:05.892565  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:05.892583  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.892608  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.892615  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.895456  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.895899  627293 pod_ready.go:93] pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:05.895917  627293 pod_ready.go:82] duration metric: took 9.205439ms for pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.895927  627293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.895993  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rz6mw
	I1209 10:51:05.896006  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.896013  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.896016  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.898484  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.899083  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:05.899101  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.899108  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.899112  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.901260  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.901817  627293 pod_ready.go:93] pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:05.901842  627293 pod_ready.go:82] duration metric: took 5.908358ms for pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.901854  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.901923  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382
	I1209 10:51:05.901934  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.901946  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.901953  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.904274  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.905123  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:05.905142  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.905152  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.905158  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.907644  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.908181  627293 pod_ready.go:93] pod "etcd-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:05.908211  627293 pod_ready.go:82] duration metric: took 6.349761ms for pod "etcd-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.908224  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.908297  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382-m02
	I1209 10:51:05.908307  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.908318  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.908329  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.910369  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.910967  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:05.910983  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.910992  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.910997  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.913018  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.913518  627293 pod_ready.go:93] pod "etcd-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:05.913539  627293 pod_ready.go:82] duration metric: took 5.308048ms for pod "etcd-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.913558  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.070017  627293 request.go:632] Waited for 156.363826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382
	I1209 10:51:06.070081  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382
	I1209 10:51:06.070086  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.070095  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.070102  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.073645  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:06.269848  627293 request.go:632] Waited for 195.364699ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:06.269918  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:06.269924  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.269931  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.269935  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.272803  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:06.273443  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:06.273469  627293 pod_ready.go:82] duration metric: took 359.901606ms for pod "kube-apiserver-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.273484  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.469639  627293 request.go:632] Waited for 196.043735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m02
	I1209 10:51:06.469733  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m02
	I1209 10:51:06.469741  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.469754  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.469762  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.473158  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:06.670306  627293 request.go:632] Waited for 196.412719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:06.670379  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:06.670387  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.670399  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.670409  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.673435  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:06.673975  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:06.673996  627293 pod_ready.go:82] duration metric: took 400.504015ms for pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.674006  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.870147  627293 request.go:632] Waited for 196.063707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382
	I1209 10:51:06.870265  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382
	I1209 10:51:06.870276  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.870285  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.870292  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.873707  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.069908  627293 request.go:632] Waited for 195.387799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:07.069975  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:07.069983  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.069995  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.070015  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.073101  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.073736  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:07.073758  627293 pod_ready.go:82] duration metric: took 399.744041ms for pod "kube-controller-manager-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.073774  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.269459  627293 request.go:632] Waited for 195.589987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m02
	I1209 10:51:07.269554  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m02
	I1209 10:51:07.269566  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.269577  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.269584  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.273156  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.470290  627293 request.go:632] Waited for 196.338376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:07.470357  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:07.470364  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.470374  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.470384  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.474385  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.474970  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:07.474989  627293 pod_ready.go:82] duration metric: took 401.206827ms for pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.475001  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dckpl" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.670046  627293 request.go:632] Waited for 194.938435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dckpl
	I1209 10:51:07.670123  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dckpl
	I1209 10:51:07.670153  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.670161  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.670177  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.673612  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.869971  627293 request.go:632] Waited for 195.374837ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:07.870066  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:07.870077  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.870089  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.870096  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.873498  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.873966  627293 pod_ready.go:93] pod "kube-proxy-dckpl" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:07.873986  627293 pod_ready.go:82] duration metric: took 398.974048ms for pod "kube-proxy-dckpl" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.873999  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wrvgb" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.070122  627293 request.go:632] Waited for 195.97145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgb
	I1209 10:51:08.070208  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgb
	I1209 10:51:08.070220  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.070232  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.070246  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.073337  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:08.270335  627293 request.go:632] Waited for 196.383902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:08.270428  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:08.270439  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.270446  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.270450  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.273875  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:08.274422  627293 pod_ready.go:93] pod "kube-proxy-wrvgb" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:08.274444  627293 pod_ready.go:82] duration metric: took 400.436343ms for pod "kube-proxy-wrvgb" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.274455  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.469480  627293 request.go:632] Waited for 194.92406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382
	I1209 10:51:08.469571  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382
	I1209 10:51:08.469579  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.469593  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.469604  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.473101  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:08.670247  627293 request.go:632] Waited for 196.404632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:08.670318  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:08.670323  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.670331  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.670334  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.673487  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:08.674226  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:08.674250  627293 pod_ready.go:82] duration metric: took 399.789273ms for pod "kube-scheduler-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.674263  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.870290  627293 request.go:632] Waited for 195.926045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m02
	I1209 10:51:08.870371  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m02
	I1209 10:51:08.870379  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.870387  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.870393  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.873809  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:09.069870  627293 request.go:632] Waited for 195.368943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:09.069944  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:09.069950  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.069962  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.069967  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.074483  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:09.075070  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:09.075095  627293 pod_ready.go:82] duration metric: took 400.825701ms for pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:09.075107  627293 pod_ready.go:39] duration metric: took 3.199339967s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
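
The pod_ready loop above is the usual client-go pattern: poll each control-plane pod and return once its Ready condition is True. A minimal, self-contained sketch of that check follows; the kubeconfig path and the 2s polling interval are illustrative assumptions, not values taken from this run.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig location; the test itself talks to https://192.168.39.69:8443.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s for up to 6 minutes, the same budget the log shows.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-792382", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as "not ready yet" and keep polling
			}
			return podReady(pod), nil
		})
	fmt.Println("pod ready:", err == nil)
}
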
	I1209 10:51:09.075137  627293 api_server.go:52] waiting for apiserver process to appear ...
	I1209 10:51:09.075203  627293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 10:51:09.089759  627293 api_server.go:72] duration metric: took 21.007136874s to wait for apiserver process to appear ...
	I1209 10:51:09.089785  627293 api_server.go:88] waiting for apiserver healthz status ...
	I1209 10:51:09.089806  627293 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I1209 10:51:09.093868  627293 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I1209 10:51:09.093935  627293 round_trippers.go:463] GET https://192.168.39.69:8443/version
	I1209 10:51:09.093940  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.093949  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.093957  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.094830  627293 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1209 10:51:09.094916  627293 api_server.go:141] control plane version: v1.31.2
	I1209 10:51:09.094932  627293 api_server.go:131] duration metric: took 5.141357ms to wait for apiserver health ...
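
The healthz/version probe above amounts to two HTTPS GETs against the apiserver. A rough stand-alone sketch is below; it skips TLS verification and client-certificate auth purely for illustration, so an RBAC-restricted cluster may answer 401/403 where minikube's authenticated client gets 200.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkAPIServer probes /healthz and /version, mirroring the log above.
// InsecureSkipVerify is only for this sketch; minikube verifies the
// cluster CA and presents client certificates.
func checkAPIServer(base string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get(base + path)
		if err != nil {
			return err
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s returned %d: %s\n", path, resp.StatusCode, body)
	}
	return nil
}

func main() {
	_ = checkAPIServer("https://192.168.39.69:8443")
}
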
	I1209 10:51:09.094940  627293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 10:51:09.269312  627293 request.go:632] Waited for 174.277582ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:09.269388  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:09.269394  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.269402  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.269407  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.274316  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:09.278484  627293 system_pods.go:59] 17 kube-system pods found
	I1209 10:51:09.278512  627293 system_pods.go:61] "coredns-7c65d6cfc9-8hlml" [d820cd6c-5064-4934-adc8-c68f84c09b46] Running
	I1209 10:51:09.278518  627293 system_pods.go:61] "coredns-7c65d6cfc9-rz6mw" [af297b6d-91f1-4114-b98c-cdfdfbd1589e] Running
	I1209 10:51:09.278523  627293 system_pods.go:61] "etcd-ha-792382" [eee2fbad-7f2f-4f5a-b701-abdbf8456f99] Running
	I1209 10:51:09.278527  627293 system_pods.go:61] "etcd-ha-792382-m02" [424768ff-af5d-4a31-b062-ab2c9576884d] Running
	I1209 10:51:09.278531  627293 system_pods.go:61] "kindnet-bqp2z" [b2c40579-4d72-4efe-b921-1e0f98b91544] Running
	I1209 10:51:09.278534  627293 system_pods.go:61] "kindnet-hkrhk" [9b35011c-ab45-4f55-a60f-08f9c4509c1d] Running
	I1209 10:51:09.278540  627293 system_pods.go:61] "kube-apiserver-ha-792382" [5157cfb0-bc91-4efe-b2e2-689778d5b012] Running
	I1209 10:51:09.278544  627293 system_pods.go:61] "kube-apiserver-ha-792382-m02" [71f9ce1c-aeff-4853-83ec-01a7fb6a81d5] Running
	I1209 10:51:09.278547  627293 system_pods.go:61] "kube-controller-manager-ha-792382" [f8cb7175-f6fd-4dcf-a6a5-66c44aa136ce] Running
	I1209 10:51:09.278550  627293 system_pods.go:61] "kube-controller-manager-ha-792382-m02" [2f7d8deb-ddad-47ac-ad18-5f8e7d0e095d] Running
	I1209 10:51:09.278553  627293 system_pods.go:61] "kube-proxy-dckpl" [13f7bda1-f9c2-4fd0-96e0-b6aee1139bc1] Running
	I1209 10:51:09.278556  627293 system_pods.go:61] "kube-proxy-wrvgb" [2531e29f-a4d5-41f9-8c38-3220b4caf96b] Running
	I1209 10:51:09.278560  627293 system_pods.go:61] "kube-scheduler-ha-792382" [b11693d0-b2e7-45a1-a1a2-6519a1535c45] Running
	I1209 10:51:09.278566  627293 system_pods.go:61] "kube-scheduler-ha-792382-m02" [250ce40c-5cbf-4f5d-a475-4f3d7ec100d9] Running
	I1209 10:51:09.278569  627293 system_pods.go:61] "kube-vip-ha-792382" [511f3a84-b444-4603-8f6f-eeeb262b1384] Running
	I1209 10:51:09.278574  627293 system_pods.go:61] "kube-vip-ha-792382-m02" [f2110c28-09f5-42f9-8e16-beec9759a0a2] Running
	I1209 10:51:09.278578  627293 system_pods.go:61] "storage-provisioner" [4419fe4f-e2ed-4ecb-a912-2dd074e29727] Running
	I1209 10:51:09.278587  627293 system_pods.go:74] duration metric: took 183.639674ms to wait for pod list to return data ...
	I1209 10:51:09.278598  627293 default_sa.go:34] waiting for default service account to be created ...
	I1209 10:51:09.470106  627293 request.go:632] Waited for 191.4045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I1209 10:51:09.470215  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I1209 10:51:09.470227  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.470242  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.470252  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.479626  627293 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1209 10:51:09.479907  627293 default_sa.go:45] found service account: "default"
	I1209 10:51:09.479929  627293 default_sa.go:55] duration metric: took 201.319758ms for default service account to be created ...
	I1209 10:51:09.479942  627293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 10:51:09.670105  627293 request.go:632] Waited for 190.065824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:09.670208  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:09.670215  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.670223  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.670228  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.674641  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:09.679080  627293 system_pods.go:86] 17 kube-system pods found
	I1209 10:51:09.679114  627293 system_pods.go:89] "coredns-7c65d6cfc9-8hlml" [d820cd6c-5064-4934-adc8-c68f84c09b46] Running
	I1209 10:51:09.679123  627293 system_pods.go:89] "coredns-7c65d6cfc9-rz6mw" [af297b6d-91f1-4114-b98c-cdfdfbd1589e] Running
	I1209 10:51:09.679131  627293 system_pods.go:89] "etcd-ha-792382" [eee2fbad-7f2f-4f5a-b701-abdbf8456f99] Running
	I1209 10:51:09.679138  627293 system_pods.go:89] "etcd-ha-792382-m02" [424768ff-af5d-4a31-b062-ab2c9576884d] Running
	I1209 10:51:09.679143  627293 system_pods.go:89] "kindnet-bqp2z" [b2c40579-4d72-4efe-b921-1e0f98b91544] Running
	I1209 10:51:09.679149  627293 system_pods.go:89] "kindnet-hkrhk" [9b35011c-ab45-4f55-a60f-08f9c4509c1d] Running
	I1209 10:51:09.679156  627293 system_pods.go:89] "kube-apiserver-ha-792382" [5157cfb0-bc91-4efe-b2e2-689778d5b012] Running
	I1209 10:51:09.679165  627293 system_pods.go:89] "kube-apiserver-ha-792382-m02" [71f9ce1c-aeff-4853-83ec-01a7fb6a81d5] Running
	I1209 10:51:09.679171  627293 system_pods.go:89] "kube-controller-manager-ha-792382" [f8cb7175-f6fd-4dcf-a6a5-66c44aa136ce] Running
	I1209 10:51:09.679180  627293 system_pods.go:89] "kube-controller-manager-ha-792382-m02" [2f7d8deb-ddad-47ac-ad18-5f8e7d0e095d] Running
	I1209 10:51:09.679184  627293 system_pods.go:89] "kube-proxy-dckpl" [13f7bda1-f9c2-4fd0-96e0-b6aee1139bc1] Running
	I1209 10:51:09.679188  627293 system_pods.go:89] "kube-proxy-wrvgb" [2531e29f-a4d5-41f9-8c38-3220b4caf96b] Running
	I1209 10:51:09.679195  627293 system_pods.go:89] "kube-scheduler-ha-792382" [b11693d0-b2e7-45a1-a1a2-6519a1535c45] Running
	I1209 10:51:09.679198  627293 system_pods.go:89] "kube-scheduler-ha-792382-m02" [250ce40c-5cbf-4f5d-a475-4f3d7ec100d9] Running
	I1209 10:51:09.679204  627293 system_pods.go:89] "kube-vip-ha-792382" [511f3a84-b444-4603-8f6f-eeeb262b1384] Running
	I1209 10:51:09.679208  627293 system_pods.go:89] "kube-vip-ha-792382-m02" [f2110c28-09f5-42f9-8e16-beec9759a0a2] Running
	I1209 10:51:09.679214  627293 system_pods.go:89] "storage-provisioner" [4419fe4f-e2ed-4ecb-a912-2dd074e29727] Running
	I1209 10:51:09.679221  627293 system_pods.go:126] duration metric: took 199.268781ms to wait for k8s-apps to be running ...
	I1209 10:51:09.679230  627293 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 10:51:09.679276  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:51:09.694076  627293 system_svc.go:56] duration metric: took 14.835467ms WaitForService to wait for kubelet
	I1209 10:51:09.694109  627293 kubeadm.go:582] duration metric: took 21.611489035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 10:51:09.694134  627293 node_conditions.go:102] verifying NodePressure condition ...
	I1209 10:51:09.869608  627293 request.go:632] Waited for 175.356595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes
	I1209 10:51:09.869706  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes
	I1209 10:51:09.869714  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.869723  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.869734  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.873420  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:09.874254  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:51:09.874278  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:51:09.874300  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:51:09.874304  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:51:09.874310  627293 node_conditions.go:105] duration metric: took 180.168766ms to run NodePressure ...
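
The node_conditions lines read each node's capacity fields from GET /api/v1/nodes. A compact sketch, reusing the clientset and imports from the earlier client-go example:

// printNodeCapacity mirrors the node_conditions readout above: ephemeral
// storage and CPU capacity for every node in the cluster.
func printNodeCapacity(ctx context.Context, clientset *kubernetes.Clientset) error {
	nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
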
	I1209 10:51:09.874324  627293 start.go:241] waiting for startup goroutines ...
	I1209 10:51:09.874349  627293 start.go:255] writing updated cluster config ...
	I1209 10:51:09.876293  627293 out.go:201] 
	I1209 10:51:09.877844  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:51:09.877938  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:51:09.879618  627293 out.go:177] * Starting "ha-792382-m03" control-plane node in "ha-792382" cluster
	I1209 10:51:09.880651  627293 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:51:09.880677  627293 cache.go:56] Caching tarball of preloaded images
	I1209 10:51:09.880794  627293 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 10:51:09.880808  627293 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 10:51:09.880894  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:51:09.881065  627293 start.go:360] acquireMachinesLock for ha-792382-m03: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 10:51:09.881109  627293 start.go:364] duration metric: took 24.695µs to acquireMachinesLock for "ha-792382-m03"
	I1209 10:51:09.881155  627293 start.go:93] Provisioning new machine with config: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:51:09.881251  627293 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1209 10:51:09.882597  627293 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 10:51:09.882697  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:51:09.882736  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:51:09.898133  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41609
	I1209 10:51:09.898752  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:51:09.899364  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:51:09.899388  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:51:09.899714  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:51:09.899932  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetMachineName
	I1209 10:51:09.900153  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:09.900311  627293 start.go:159] libmachine.API.Create for "ha-792382" (driver="kvm2")
	I1209 10:51:09.900340  627293 client.go:168] LocalClient.Create starting
	I1209 10:51:09.900368  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 10:51:09.900399  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:51:09.900414  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:51:09.900469  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 10:51:09.900490  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:51:09.900500  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:51:09.900517  627293 main.go:141] libmachine: Running pre-create checks...
	I1209 10:51:09.900526  627293 main.go:141] libmachine: (ha-792382-m03) Calling .PreCreateCheck
	I1209 10:51:09.900676  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetConfigRaw
	I1209 10:51:09.901024  627293 main.go:141] libmachine: Creating machine...
	I1209 10:51:09.901037  627293 main.go:141] libmachine: (ha-792382-m03) Calling .Create
	I1209 10:51:09.901229  627293 main.go:141] libmachine: (ha-792382-m03) Creating KVM machine...
	I1209 10:51:09.902418  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found existing default KVM network
	I1209 10:51:09.902584  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found existing private KVM network mk-ha-792382
	I1209 10:51:09.902745  627293 main.go:141] libmachine: (ha-792382-m03) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03 ...
	I1209 10:51:09.902768  627293 main.go:141] libmachine: (ha-792382-m03) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 10:51:09.902867  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:09.902742  628056 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:51:09.902959  627293 main.go:141] libmachine: (ha-792382-m03) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 10:51:10.187575  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:10.187437  628056 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa...
	I1209 10:51:10.500975  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:10.500841  628056 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/ha-792382-m03.rawdisk...
	I1209 10:51:10.501016  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Writing magic tar header
	I1209 10:51:10.501026  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Writing SSH key tar header
	I1209 10:51:10.501034  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:10.500985  628056 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03 ...
	I1209 10:51:10.501188  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03
	I1209 10:51:10.501214  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03 (perms=drwx------)
	I1209 10:51:10.501235  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 10:51:10.501255  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:51:10.501270  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 10:51:10.501289  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 10:51:10.501315  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 10:51:10.501328  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins
	I1209 10:51:10.501340  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home
	I1209 10:51:10.501354  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Skipping /home - not owner
	I1209 10:51:10.501371  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 10:51:10.501393  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 10:51:10.501413  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 10:51:10.501426  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 10:51:10.501440  627293 main.go:141] libmachine: (ha-792382-m03) Creating domain...
	I1209 10:51:10.502439  627293 main.go:141] libmachine: (ha-792382-m03) define libvirt domain using xml: 
	I1209 10:51:10.502466  627293 main.go:141] libmachine: (ha-792382-m03) <domain type='kvm'>
	I1209 10:51:10.502476  627293 main.go:141] libmachine: (ha-792382-m03)   <name>ha-792382-m03</name>
	I1209 10:51:10.502484  627293 main.go:141] libmachine: (ha-792382-m03)   <memory unit='MiB'>2200</memory>
	I1209 10:51:10.502490  627293 main.go:141] libmachine: (ha-792382-m03)   <vcpu>2</vcpu>
	I1209 10:51:10.502495  627293 main.go:141] libmachine: (ha-792382-m03)   <features>
	I1209 10:51:10.502506  627293 main.go:141] libmachine: (ha-792382-m03)     <acpi/>
	I1209 10:51:10.502516  627293 main.go:141] libmachine: (ha-792382-m03)     <apic/>
	I1209 10:51:10.502524  627293 main.go:141] libmachine: (ha-792382-m03)     <pae/>
	I1209 10:51:10.502534  627293 main.go:141] libmachine: (ha-792382-m03)     
	I1209 10:51:10.502544  627293 main.go:141] libmachine: (ha-792382-m03)   </features>
	I1209 10:51:10.502552  627293 main.go:141] libmachine: (ha-792382-m03)   <cpu mode='host-passthrough'>
	I1209 10:51:10.502587  627293 main.go:141] libmachine: (ha-792382-m03)   
	I1209 10:51:10.502612  627293 main.go:141] libmachine: (ha-792382-m03)   </cpu>
	I1209 10:51:10.502650  627293 main.go:141] libmachine: (ha-792382-m03)   <os>
	I1209 10:51:10.502668  627293 main.go:141] libmachine: (ha-792382-m03)     <type>hvm</type>
	I1209 10:51:10.502674  627293 main.go:141] libmachine: (ha-792382-m03)     <boot dev='cdrom'/>
	I1209 10:51:10.502679  627293 main.go:141] libmachine: (ha-792382-m03)     <boot dev='hd'/>
	I1209 10:51:10.502688  627293 main.go:141] libmachine: (ha-792382-m03)     <bootmenu enable='no'/>
	I1209 10:51:10.502693  627293 main.go:141] libmachine: (ha-792382-m03)   </os>
	I1209 10:51:10.502731  627293 main.go:141] libmachine: (ha-792382-m03)   <devices>
	I1209 10:51:10.502756  627293 main.go:141] libmachine: (ha-792382-m03)     <disk type='file' device='cdrom'>
	I1209 10:51:10.502773  627293 main.go:141] libmachine: (ha-792382-m03)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/boot2docker.iso'/>
	I1209 10:51:10.502784  627293 main.go:141] libmachine: (ha-792382-m03)       <target dev='hdc' bus='scsi'/>
	I1209 10:51:10.502796  627293 main.go:141] libmachine: (ha-792382-m03)       <readonly/>
	I1209 10:51:10.502806  627293 main.go:141] libmachine: (ha-792382-m03)     </disk>
	I1209 10:51:10.502815  627293 main.go:141] libmachine: (ha-792382-m03)     <disk type='file' device='disk'>
	I1209 10:51:10.502827  627293 main.go:141] libmachine: (ha-792382-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 10:51:10.502844  627293 main.go:141] libmachine: (ha-792382-m03)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/ha-792382-m03.rawdisk'/>
	I1209 10:51:10.502854  627293 main.go:141] libmachine: (ha-792382-m03)       <target dev='hda' bus='virtio'/>
	I1209 10:51:10.502862  627293 main.go:141] libmachine: (ha-792382-m03)     </disk>
	I1209 10:51:10.502873  627293 main.go:141] libmachine: (ha-792382-m03)     <interface type='network'>
	I1209 10:51:10.502886  627293 main.go:141] libmachine: (ha-792382-m03)       <source network='mk-ha-792382'/>
	I1209 10:51:10.502901  627293 main.go:141] libmachine: (ha-792382-m03)       <model type='virtio'/>
	I1209 10:51:10.502917  627293 main.go:141] libmachine: (ha-792382-m03)     </interface>
	I1209 10:51:10.502927  627293 main.go:141] libmachine: (ha-792382-m03)     <interface type='network'>
	I1209 10:51:10.502935  627293 main.go:141] libmachine: (ha-792382-m03)       <source network='default'/>
	I1209 10:51:10.502945  627293 main.go:141] libmachine: (ha-792382-m03)       <model type='virtio'/>
	I1209 10:51:10.502954  627293 main.go:141] libmachine: (ha-792382-m03)     </interface>
	I1209 10:51:10.502965  627293 main.go:141] libmachine: (ha-792382-m03)     <serial type='pty'>
	I1209 10:51:10.502981  627293 main.go:141] libmachine: (ha-792382-m03)       <target port='0'/>
	I1209 10:51:10.503011  627293 main.go:141] libmachine: (ha-792382-m03)     </serial>
	I1209 10:51:10.503041  627293 main.go:141] libmachine: (ha-792382-m03)     <console type='pty'>
	I1209 10:51:10.503058  627293 main.go:141] libmachine: (ha-792382-m03)       <target type='serial' port='0'/>
	I1209 10:51:10.503071  627293 main.go:141] libmachine: (ha-792382-m03)     </console>
	I1209 10:51:10.503082  627293 main.go:141] libmachine: (ha-792382-m03)     <rng model='virtio'>
	I1209 10:51:10.503096  627293 main.go:141] libmachine: (ha-792382-m03)       <backend model='random'>/dev/random</backend>
	I1209 10:51:10.503113  627293 main.go:141] libmachine: (ha-792382-m03)     </rng>
	I1209 10:51:10.503127  627293 main.go:141] libmachine: (ha-792382-m03)     
	I1209 10:51:10.503136  627293 main.go:141] libmachine: (ha-792382-m03)     
	I1209 10:51:10.503142  627293 main.go:141] libmachine: (ha-792382-m03)   </devices>
	I1209 10:51:10.503150  627293 main.go:141] libmachine: (ha-792382-m03) </domain>
	I1209 10:51:10.503164  627293 main.go:141] libmachine: (ha-792382-m03) 
	I1209 10:51:10.509799  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:26:51:82 in network default
	I1209 10:51:10.510544  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:10.510571  627293 main.go:141] libmachine: (ha-792382-m03) Ensuring networks are active...
	I1209 10:51:10.511459  627293 main.go:141] libmachine: (ha-792382-m03) Ensuring network default is active
	I1209 10:51:10.511785  627293 main.go:141] libmachine: (ha-792382-m03) Ensuring network mk-ha-792382 is active
	I1209 10:51:10.512166  627293 main.go:141] libmachine: (ha-792382-m03) Getting domain xml...
	I1209 10:51:10.512954  627293 main.go:141] libmachine: (ha-792382-m03) Creating domain...
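
The domain XML printed above is handed to libvirt to define and start the VM. Below is a minimal sketch of that step, assuming the libvirt.org/go/libvirt bindings; the real work is done by docker-machine-driver-kvm2, and the truncated XML string here is a placeholder for the full document in the log.

package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

func main() {
	// Connect to the same URI the driver uses (KVMQemuURI:qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// domainXML stands in for the <domain type='kvm'> document shown above.
	domainXML := "<domain type='kvm'>...</domain>"

	dom, err := conn.DomainDefineXML(domainXML) // define the persistent domain
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // start the freshly defined domain
		log.Fatal(err)
	}
}
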
	I1209 10:51:11.772243  627293 main.go:141] libmachine: (ha-792382-m03) Waiting to get IP...
	I1209 10:51:11.773341  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:11.773804  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:11.773837  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:11.773768  628056 retry.go:31] will retry after 261.519944ms: waiting for machine to come up
	I1209 10:51:12.038077  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:12.038774  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:12.038804  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:12.038709  628056 retry.go:31] will retry after 310.562513ms: waiting for machine to come up
	I1209 10:51:12.350405  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:12.350812  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:12.350870  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:12.350779  628056 retry.go:31] will retry after 381.875413ms: waiting for machine to come up
	I1209 10:51:12.734428  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:12.734917  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:12.734939  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:12.734868  628056 retry.go:31] will retry after 376.611685ms: waiting for machine to come up
	I1209 10:51:13.113430  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:13.113850  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:13.113878  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:13.113807  628056 retry.go:31] will retry after 480.736793ms: waiting for machine to come up
	I1209 10:51:13.596329  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:13.596796  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:13.596819  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:13.596753  628056 retry.go:31] will retry after 875.034768ms: waiting for machine to come up
	I1209 10:51:14.473751  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:14.474126  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:14.474155  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:14.474088  628056 retry.go:31] will retry after 816.368717ms: waiting for machine to come up
	I1209 10:51:15.292960  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:15.293587  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:15.293618  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:15.293489  628056 retry.go:31] will retry after 1.183655157s: waiting for machine to come up
	I1209 10:51:16.478955  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:16.479455  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:16.479486  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:16.479390  628056 retry.go:31] will retry after 1.459421983s: waiting for machine to come up
	I1209 10:51:17.940565  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:17.940909  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:17.940939  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:17.940853  628056 retry.go:31] will retry after 2.01883018s: waiting for machine to come up
	I1209 10:51:19.961861  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:19.962417  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:19.962457  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:19.962353  628056 retry.go:31] will retry after 1.857861431s: waiting for machine to come up
	I1209 10:51:21.822060  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:21.822610  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:21.822640  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:21.822556  628056 retry.go:31] will retry after 2.674364218s: waiting for machine to come up
	I1209 10:51:24.499290  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:24.499696  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:24.499718  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:24.499647  628056 retry.go:31] will retry after 3.815833745s: waiting for machine to come up
	I1209 10:51:28.319279  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:28.319654  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:28.319685  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:28.319601  628056 retry.go:31] will retry after 5.165694329s: waiting for machine to come up
	I1209 10:51:33.487484  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.487908  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has current primary IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.487939  627293 main.go:141] libmachine: (ha-792382-m03) Found IP for machine: 192.168.39.82
	I1209 10:51:33.487954  627293 main.go:141] libmachine: (ha-792382-m03) Reserving static IP address...
	I1209 10:51:33.488381  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find host DHCP lease matching {name: "ha-792382-m03", mac: "52:54:00:10:ae:3c", ip: "192.168.39.82"} in network mk-ha-792382
	I1209 10:51:33.564150  627293 main.go:141] libmachine: (ha-792382-m03) Reserved static IP address: 192.168.39.82
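
The retry.go lines above poll the private network's DHCP leases for the new MAC with a growing, jittered delay until an address appears. A plain-Go sketch of that shape follows; the lookup callback and the delay schedule are illustrative, not the driver's exact code.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup() with an increasing delay until it yields an
// address or the timeout expires, the same shape as the retries above.
// lookup is a stand-in for the driver's DHCP-lease query.
func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		// Sleep the base delay plus up to 50% jitter, then grow the base.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)/2)))
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for machine IP")
}

func main() {
	ip, err := waitForIP(func() (string, bool) { return "", false }, 3*time.Second)
	fmt.Println(ip, err)
}
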
	I1209 10:51:33.564197  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Getting to WaitForSSH function...
	I1209 10:51:33.564206  627293 main.go:141] libmachine: (ha-792382-m03) Waiting for SSH to be available...
	I1209 10:51:33.567024  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.567471  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:minikube Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:33.567501  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.567664  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Using SSH client type: external
	I1209 10:51:33.567687  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa (-rw-------)
	I1209 10:51:33.567722  627293 main.go:141] libmachine: (ha-792382-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 10:51:33.567734  627293 main.go:141] libmachine: (ha-792382-m03) DBG | About to run SSH command:
	I1209 10:51:33.567748  627293 main.go:141] libmachine: (ha-792382-m03) DBG | exit 0
	I1209 10:51:33.698092  627293 main.go:141] libmachine: (ha-792382-m03) DBG | SSH cmd err, output: <nil>: 
	I1209 10:51:33.698421  627293 main.go:141] libmachine: (ha-792382-m03) KVM machine creation complete!
	I1209 10:51:33.698819  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetConfigRaw
	I1209 10:51:33.699478  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:33.699674  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:33.699826  627293 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 10:51:33.699837  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetState
	I1209 10:51:33.701167  627293 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 10:51:33.701183  627293 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 10:51:33.701191  627293 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 10:51:33.701198  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:33.703744  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.704133  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:33.704162  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.704266  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:33.704462  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.704600  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.704723  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:33.704916  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:33.705157  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:33.705168  627293 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 10:51:33.813390  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
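
The reachability probe above runs "exit 0" over SSH with the machine's freshly generated key. A stand-alone sketch using golang.org/x/crypto/ssh, with the host, user, and key path taken from the log; host-key checking is skipped here just as the driver does during first boot.

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // provisioning-time only
	}
	client, err := ssh.Dial("tcp", "192.168.39.82:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	if err := session.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH reachable")
}
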
	I1209 10:51:33.813423  627293 main.go:141] libmachine: Detecting the provisioner...
	I1209 10:51:33.813436  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:33.816441  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.816804  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:33.816841  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.816951  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:33.817167  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.817376  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.817559  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:33.817716  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:33.817907  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:33.817921  627293 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 10:51:33.926605  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 10:51:33.926676  627293 main.go:141] libmachine: found compatible host: buildroot
	I1209 10:51:33.926683  627293 main.go:141] libmachine: Provisioning with buildroot...
	I1209 10:51:33.926691  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetMachineName
	I1209 10:51:33.926942  627293 buildroot.go:166] provisioning hostname "ha-792382-m03"
	I1209 10:51:33.926972  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetMachineName
	I1209 10:51:33.927120  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:33.929899  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.930353  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:33.930382  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.930545  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:33.930780  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.930935  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.931076  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:33.931236  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:33.931442  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:33.931455  627293 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792382-m03 && echo "ha-792382-m03" | sudo tee /etc/hostname
	I1209 10:51:34.053804  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792382-m03
	
	I1209 10:51:34.053838  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.056450  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.056795  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.056821  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.057070  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.057253  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.057460  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.057580  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.057749  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:34.057912  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:34.057932  627293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792382-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792382-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792382-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 10:51:34.174396  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:51:34.174436  627293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 10:51:34.174459  627293 buildroot.go:174] setting up certificates
	I1209 10:51:34.174471  627293 provision.go:84] configureAuth start
	I1209 10:51:34.174484  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetMachineName
	I1209 10:51:34.174826  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetIP
	I1209 10:51:34.178006  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.178384  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.178414  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.178593  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.180882  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.181259  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.181297  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.181434  627293 provision.go:143] copyHostCerts
	I1209 10:51:34.181467  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:51:34.181509  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 10:51:34.181521  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:51:34.181599  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 10:51:34.181708  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:51:34.181739  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 10:51:34.181750  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:51:34.181796  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 10:51:34.181862  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:51:34.181879  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 10:51:34.181885  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:51:34.181910  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 10:51:34.181961  627293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.ha-792382-m03 san=[127.0.0.1 192.168.39.82 ha-792382-m03 localhost minikube]
	I1209 10:51:34.410867  627293 provision.go:177] copyRemoteCerts
	I1209 10:51:34.410930  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 10:51:34.410961  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.414202  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.414663  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.414696  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.414964  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.415202  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.415374  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.415561  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa Username:docker}
	I1209 10:51:34.500121  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 10:51:34.500216  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 10:51:34.525465  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 10:51:34.525566  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 10:51:34.548733  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 10:51:34.548819  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 10:51:34.570848  627293 provision.go:87] duration metric: took 396.361471ms to configureAuth
	I1209 10:51:34.570884  627293 buildroot.go:189] setting minikube options for container-runtime
	I1209 10:51:34.571164  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:51:34.571276  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.574107  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.574532  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.574557  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.574761  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.574957  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.575114  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.575329  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.575548  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:34.575797  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:34.575824  627293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 10:51:34.816625  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 10:51:34.816655  627293 main.go:141] libmachine: Checking connection to Docker...
	I1209 10:51:34.816670  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetURL
	I1209 10:51:34.817924  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Using libvirt version 6000000
	I1209 10:51:34.820293  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.820739  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.820782  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.820943  627293 main.go:141] libmachine: Docker is up and running!
	I1209 10:51:34.820954  627293 main.go:141] libmachine: Reticulating splines...
	I1209 10:51:34.820962  627293 client.go:171] duration metric: took 24.920612799s to LocalClient.Create
	I1209 10:51:34.820990  627293 start.go:167] duration metric: took 24.920677638s to libmachine.API.Create "ha-792382"
	I1209 10:51:34.821001  627293 start.go:293] postStartSetup for "ha-792382-m03" (driver="kvm2")
	I1209 10:51:34.821015  627293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 10:51:34.821041  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:34.821314  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 10:51:34.821340  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.823716  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.824123  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.824150  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.824346  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.824540  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.824710  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.824899  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa Username:docker}
	I1209 10:51:34.908596  627293 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 10:51:34.912587  627293 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 10:51:34.912634  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 10:51:34.912758  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 10:51:34.912881  627293 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 10:51:34.912894  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /etc/ssl/certs/6170172.pem
	I1209 10:51:34.913014  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 10:51:34.921828  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:51:34.944676  627293 start.go:296] duration metric: took 123.657477ms for postStartSetup
	I1209 10:51:34.944735  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetConfigRaw
	I1209 10:51:34.945372  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetIP
	I1209 10:51:34.948020  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.948350  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.948374  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.948706  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:51:34.948901  627293 start.go:128] duration metric: took 25.067639086s to createHost
	I1209 10:51:34.948928  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.951092  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.951471  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.951504  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.951672  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.951858  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.952015  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.952130  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.952269  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:34.952475  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:34.952491  627293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 10:51:35.062736  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733741495.040495881
	
	I1209 10:51:35.062764  627293 fix.go:216] guest clock: 1733741495.040495881
	I1209 10:51:35.062773  627293 fix.go:229] Guest: 2024-12-09 10:51:35.040495881 +0000 UTC Remote: 2024-12-09 10:51:34.948914535 +0000 UTC m=+142.833153468 (delta=91.581346ms)
	I1209 10:51:35.062795  627293 fix.go:200] guest clock delta is within tolerance: 91.581346ms
	I1209 10:51:35.062802  627293 start.go:83] releasing machines lock for "ha-792382-m03", held for 25.181683585s
	I1209 10:51:35.062825  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:35.063125  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetIP
	I1209 10:51:35.065564  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.065919  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:35.065950  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.068041  627293 out.go:177] * Found network options:
	I1209 10:51:35.069311  627293 out.go:177]   - NO_PROXY=192.168.39.69,192.168.39.89
	W1209 10:51:35.070337  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	W1209 10:51:35.070367  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 10:51:35.070382  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:35.070888  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:35.071098  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:35.071216  627293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 10:51:35.071260  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	W1209 10:51:35.071333  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	W1209 10:51:35.071358  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 10:51:35.071448  627293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 10:51:35.071472  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:35.074136  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.074287  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.074566  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:35.074588  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.074614  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:35.074633  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.074729  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:35.074920  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:35.074923  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:35.075091  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:35.075094  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:35.075270  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:35.075298  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa Username:docker}
	I1209 10:51:35.075413  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa Username:docker}
	I1209 10:51:35.318511  627293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 10:51:35.324511  627293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 10:51:35.324586  627293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 10:51:35.341575  627293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 10:51:35.341607  627293 start.go:495] detecting cgroup driver to use...
	I1209 10:51:35.341686  627293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 10:51:35.357724  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 10:51:35.372685  627293 docker.go:217] disabling cri-docker service (if available) ...
	I1209 10:51:35.372771  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 10:51:35.387627  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 10:51:35.401716  627293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 10:51:35.525416  627293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 10:51:35.688544  627293 docker.go:233] disabling docker service ...
	I1209 10:51:35.688627  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 10:51:35.703495  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 10:51:35.717769  627293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 10:51:35.838656  627293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 10:51:35.968740  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 10:51:35.982914  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 10:51:36.001011  627293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 10:51:36.001092  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.011496  627293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 10:51:36.011565  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.021527  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.031202  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.041196  627293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 10:51:36.051656  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.062085  627293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.078955  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.088919  627293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 10:51:36.098428  627293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 10:51:36.098491  627293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 10:51:36.112478  627293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 10:51:36.121985  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:51:36.236147  627293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 10:51:36.331891  627293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 10:51:36.331989  627293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 10:51:36.336578  627293 start.go:563] Will wait 60s for crictl version
	I1209 10:51:36.336641  627293 ssh_runner.go:195] Run: which crictl
	I1209 10:51:36.340301  627293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 10:51:36.380474  627293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 10:51:36.380557  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:51:36.408527  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:51:36.438078  627293 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 10:51:36.439329  627293 out.go:177]   - env NO_PROXY=192.168.39.69
	I1209 10:51:36.440501  627293 out.go:177]   - env NO_PROXY=192.168.39.69,192.168.39.89
	I1209 10:51:36.441659  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetIP
	I1209 10:51:36.444828  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:36.445310  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:36.445339  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:36.445521  627293 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 10:51:36.449517  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:51:36.461352  627293 mustload.go:65] Loading cluster: ha-792382
	I1209 10:51:36.461581  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:51:36.461851  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:51:36.461915  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:51:36.476757  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I1209 10:51:36.477266  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:51:36.477839  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:51:36.477861  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:51:36.478264  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:51:36.478470  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:51:36.480228  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:51:36.480540  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:51:36.480578  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:51:36.495892  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I1209 10:51:36.496439  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:51:36.496999  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:51:36.497024  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:51:36.497365  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:51:36.497597  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:51:36.497777  627293 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382 for IP: 192.168.39.82
	I1209 10:51:36.497796  627293 certs.go:194] generating shared ca certs ...
	I1209 10:51:36.497816  627293 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:51:36.497951  627293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 10:51:36.497987  627293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 10:51:36.497996  627293 certs.go:256] generating profile certs ...
	I1209 10:51:36.498067  627293 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key
	I1209 10:51:36.498091  627293 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.74be6275
	I1209 10:51:36.498107  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.74be6275 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.89 192.168.39.82 192.168.39.254]
	I1209 10:51:36.575706  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.74be6275 ...
	I1209 10:51:36.575744  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.74be6275: {Name:mkc0279d5f95c7c05a4a03239304c698f543bc23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:51:36.575927  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.74be6275 ...
	I1209 10:51:36.575940  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.74be6275: {Name:mk628bdb195c5612308f11734296bd7934f36956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:51:36.576016  627293 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.74be6275 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt
	I1209 10:51:36.576148  627293 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.74be6275 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key
	I1209 10:51:36.576277  627293 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key
	I1209 10:51:36.576293  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 10:51:36.576307  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 10:51:36.576321  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 10:51:36.576334  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 10:51:36.576347  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 10:51:36.576359  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 10:51:36.576371  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 10:51:36.590260  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 10:51:36.590358  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 10:51:36.590394  627293 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 10:51:36.590412  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 10:51:36.590439  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 10:51:36.590462  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 10:51:36.590483  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 10:51:36.590521  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:51:36.590548  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /usr/share/ca-certificates/6170172.pem
	I1209 10:51:36.590563  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:51:36.590576  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem -> /usr/share/ca-certificates/617017.pem
	I1209 10:51:36.590614  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:51:36.594031  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:51:36.594418  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:51:36.594452  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:51:36.594660  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:51:36.594910  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:51:36.595086  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:51:36.595232  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:51:36.666577  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 10:51:36.671392  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 10:51:36.681688  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 10:51:36.685694  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 10:51:36.696364  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 10:51:36.700718  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 10:51:36.712302  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 10:51:36.716534  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 10:51:36.728128  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 10:51:36.732026  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 10:51:36.743956  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 10:51:36.748200  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1209 10:51:36.761818  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 10:51:36.786260  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 10:51:36.809394  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 10:51:36.832350  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 10:51:36.854875  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1209 10:51:36.876691  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 10:51:36.900011  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 10:51:36.922859  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 10:51:36.945086  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 10:51:36.966983  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 10:51:36.989660  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 10:51:37.011442  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 10:51:37.027256  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 10:51:37.042921  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 10:51:37.059579  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 10:51:37.078911  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 10:51:37.094738  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1209 10:51:37.112113  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 10:51:37.130720  627293 ssh_runner.go:195] Run: openssl version
	I1209 10:51:37.136460  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 10:51:37.148061  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:51:37.152555  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:51:37.152627  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:51:37.158639  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 10:51:37.170061  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 10:51:37.180567  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 10:51:37.184633  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 10:51:37.184695  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 10:51:37.190044  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 10:51:37.200767  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 10:51:37.211239  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 10:51:37.215531  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 10:51:37.215617  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 10:51:37.221282  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 10:51:37.232891  627293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 10:51:37.237033  627293 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 10:51:37.237096  627293 kubeadm.go:934] updating node {m03 192.168.39.82 8443 v1.31.2 crio true true} ...
	I1209 10:51:37.237210  627293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792382-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 10:51:37.237247  627293 kube-vip.go:115] generating kube-vip config ...
	I1209 10:51:37.237291  627293 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 10:51:37.254154  627293 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 10:51:37.254286  627293 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1209 10:51:37.254376  627293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 10:51:37.266499  627293 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1209 10:51:37.266573  627293 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1209 10:51:37.276989  627293 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1209 10:51:37.277004  627293 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1209 10:51:37.277031  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 10:51:37.277052  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:51:37.277099  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 10:51:37.276989  627293 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1209 10:51:37.277162  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 10:51:37.277221  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 10:51:37.294260  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 10:51:37.294329  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1209 10:51:37.294354  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1209 10:51:37.294397  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 10:51:37.294410  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1209 10:51:37.294447  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1209 10:51:37.309738  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1209 10:51:37.309777  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1209 10:51:38.106081  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 10:51:38.115636  627293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 10:51:38.132759  627293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 10:51:38.149726  627293 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 10:51:38.166083  627293 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 10:51:38.169937  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:51:38.181150  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:51:38.308494  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:51:38.325679  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:51:38.326045  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:51:38.326105  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:51:38.344459  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45865
	I1209 10:51:38.345084  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:51:38.345753  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:51:38.345796  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:51:38.346197  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:51:38.346437  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:51:38.346586  627293 start.go:317] joinCluster: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:51:38.346740  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 10:51:38.346768  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:51:38.349642  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:51:38.350099  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:51:38.350125  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:51:38.350286  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:51:38.350484  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:51:38.350634  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:51:38.350780  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:51:38.514216  627293 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:51:38.514274  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token exrmr9.huiz7swpoaojy929 --discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-792382-m03 --control-plane --apiserver-advertise-address=192.168.39.82 --apiserver-bind-port=8443"
	I1209 10:52:01.803198  627293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token exrmr9.huiz7swpoaojy929 --discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-792382-m03 --control-plane --apiserver-advertise-address=192.168.39.82 --apiserver-bind-port=8443": (23.288893034s)
	I1209 10:52:01.803245  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1209 10:52:02.338453  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792382-m03 minikube.k8s.io/updated_at=2024_12_09T10_52_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=ha-792382 minikube.k8s.io/primary=false
	I1209 10:52:02.475613  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-792382-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 10:52:02.591820  627293 start.go:319] duration metric: took 24.245228011s to joinCluster
	I1209 10:52:02.591921  627293 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:52:02.592324  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:52:02.593526  627293 out.go:177] * Verifying Kubernetes components...
	I1209 10:52:02.594809  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:52:02.839263  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:52:02.861519  627293 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:52:02.861874  627293 kapi.go:59] client config for ha-792382: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt", KeyFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key", CAFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 10:52:02.861974  627293 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.69:8443
	I1209 10:52:02.862413  627293 node_ready.go:35] waiting up to 6m0s for node "ha-792382-m03" to be "Ready" ...
	I1209 10:52:02.862536  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:02.862551  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:02.862563  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:02.862569  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:02.866706  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:03.363562  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:03.363585  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:03.363593  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:03.363597  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:03.367171  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:03.863250  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:03.863275  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:03.863284  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:03.863288  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:03.866476  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:04.363562  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:04.363593  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:04.363607  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:04.363611  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:04.367286  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:04.862912  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:04.862943  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:04.862957  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:04.862964  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:04.866217  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:04.866889  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:05.363334  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:05.363359  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:05.363368  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:05.363371  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:05.366850  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:05.863531  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:05.863565  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:05.863577  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:05.863584  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:05.867191  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:06.363075  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:06.363103  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:06.363116  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:06.363123  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:06.368722  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:06.862720  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:06.862750  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:06.862764  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:06.862773  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:06.865876  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:07.363131  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:07.363158  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:07.363167  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:07.363181  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:07.366603  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:07.367388  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:07.862715  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:07.862743  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:07.862756  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:07.862762  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:07.866073  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:08.362710  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:08.362744  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:08.362756  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:08.362763  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:08.366953  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:08.862771  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:08.862799  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:08.862808  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:08.862813  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:08.866875  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:09.362787  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:09.362812  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:09.362820  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:09.362824  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:09.367053  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:09.367603  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:09.862752  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:09.862786  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:09.862803  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:09.862809  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:09.866207  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:10.363296  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:10.363329  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:10.363341  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:10.363347  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:10.368594  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:10.863471  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:10.863504  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:10.863518  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:10.863523  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:10.868956  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:11.362961  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:11.362988  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:11.362998  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:11.363003  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:11.366828  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:11.862866  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:11.862896  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:11.862906  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:11.862912  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:11.868040  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:11.868910  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:12.363520  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:12.363543  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:12.363551  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:12.363555  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:12.367064  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:12.862709  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:12.862738  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:12.862747  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:12.862751  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:12.866024  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:13.362946  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:13.362972  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:13.362981  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:13.362985  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:13.367208  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:13.863257  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:13.863282  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:13.863291  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:13.863295  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:13.866570  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:14.363551  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:14.363576  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:14.363588  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:14.363595  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:14.367509  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:14.368341  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:14.863449  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:14.863475  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:14.863485  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:14.863492  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:14.866808  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:15.363473  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:15.363501  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:15.363510  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:15.363514  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:15.367252  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:15.863063  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:15.863086  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:15.863095  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:15.863099  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:15.866694  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:16.363487  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:16.363515  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:16.363525  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:16.363529  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:16.366968  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:16.863237  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:16.863267  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:16.863277  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:16.863285  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:16.866528  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:16.867067  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:17.363592  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:17.363616  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:17.363628  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:17.363634  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:17.367261  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:17.863310  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:17.863334  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:17.863343  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:17.863347  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:17.866881  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:18.363575  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:18.363603  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:18.363614  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:18.363624  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:18.368502  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:18.863660  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:18.863684  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:18.863693  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:18.863698  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:18.866946  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:18.867391  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:19.362762  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:19.362786  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:19.362794  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:19.362798  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:19.366684  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:19.863495  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:19.863581  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:19.863600  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:19.863608  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:19.870858  627293 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1209 10:52:20.363448  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:20.363473  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.363482  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.363487  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.367472  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.368003  627293 node_ready.go:49] node "ha-792382-m03" has status "Ready":"True"
	I1209 10:52:20.368025  627293 node_ready.go:38] duration metric: took 17.505584111s for node "ha-792382-m03" to be "Ready" ...
	I1209 10:52:20.368035  627293 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 10:52:20.368124  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:20.368135  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.368143  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.368147  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.375067  627293 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 10:52:20.382809  627293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.382913  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8hlml
	I1209 10:52:20.382922  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.382932  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.382939  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.386681  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.387473  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:20.387492  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.387502  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.387506  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.390201  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:20.390989  627293 pod_ready.go:93] pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.391012  627293 pod_ready.go:82] duration metric: took 8.170284ms for pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.391025  627293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.391107  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rz6mw
	I1209 10:52:20.391121  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.391132  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.391139  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.393896  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:20.394886  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:20.394902  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.394910  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.394913  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.397630  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:20.398092  627293 pod_ready.go:93] pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.398114  627293 pod_ready.go:82] duration metric: took 7.080989ms for pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.398128  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.398227  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382
	I1209 10:52:20.398238  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.398249  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.398255  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.402755  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:20.403454  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:20.403477  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.403487  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.403495  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.407171  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.407675  627293 pod_ready.go:93] pod "etcd-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.407690  627293 pod_ready.go:82] duration metric: took 9.55619ms for pod "etcd-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.407701  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.407761  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382-m02
	I1209 10:52:20.407769  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.407776  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.407782  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.411699  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.412198  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:20.412214  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.412221  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.412228  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.415128  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:20.415876  627293 pod_ready.go:93] pod "etcd-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.415895  627293 pod_ready.go:82] duration metric: took 8.185439ms for pod "etcd-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.415927  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.564348  627293 request.go:632] Waited for 148.293235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382-m03
	I1209 10:52:20.564443  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382-m03
	I1209 10:52:20.564455  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.564475  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.564485  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.567758  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.763843  627293 request.go:632] Waited for 195.366287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:20.763920  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:20.763933  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.763945  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.763957  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.772124  627293 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1209 10:52:20.772769  627293 pod_ready.go:93] pod "etcd-ha-792382-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.772802  627293 pod_ready.go:82] duration metric: took 356.849767ms for pod "etcd-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.772827  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.963692  627293 request.go:632] Waited for 190.744323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382
	I1209 10:52:20.963762  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382
	I1209 10:52:20.963767  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.963775  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.963781  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.966983  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.163987  627293 request.go:632] Waited for 196.382643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:21.164057  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:21.164062  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.164070  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.164074  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.167406  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.168047  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:21.168074  627293 pod_ready.go:82] duration metric: took 395.237987ms for pod "kube-apiserver-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.168086  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.364059  627293 request.go:632] Waited for 195.853676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m02
	I1209 10:52:21.364141  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m02
	I1209 10:52:21.364147  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.364155  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.364164  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.368500  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:21.563923  627293 request.go:632] Waited for 194.790397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:21.563997  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:21.564006  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.564018  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.564029  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.567739  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.568495  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:21.568518  627293 pod_ready.go:82] duration metric: took 400.423423ms for pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.568529  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.763480  627293 request.go:632] Waited for 194.86491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m03
	I1209 10:52:21.763574  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m03
	I1209 10:52:21.763581  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.763594  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.763602  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.767033  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.964208  627293 request.go:632] Waited for 196.356498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:21.964296  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:21.964305  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.964340  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.964351  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.967752  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.968228  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:21.968247  627293 pod_ready.go:82] duration metric: took 399.712092ms for pod "kube-apiserver-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.968258  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.163746  627293 request.go:632] Waited for 195.415661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382
	I1209 10:52:22.163805  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382
	I1209 10:52:22.163810  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.163823  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.163830  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.166645  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:22.364336  627293 request.go:632] Waited for 197.03194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:22.364428  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:22.364449  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.364480  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.364491  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.368286  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:22.369016  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:22.369039  627293 pod_ready.go:82] duration metric: took 400.774826ms for pod "kube-controller-manager-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.369050  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.564041  627293 request.go:632] Waited for 194.907266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m02
	I1209 10:52:22.564119  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m02
	I1209 10:52:22.564127  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.564140  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.564149  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.567707  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:22.763845  627293 request.go:632] Waited for 195.40032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:22.763928  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:22.763935  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.763956  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.763982  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.767705  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:22.768312  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:22.768335  627293 pod_ready.go:82] duration metric: took 399.277854ms for pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.768350  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.964360  627293 request.go:632] Waited for 195.903206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m03
	I1209 10:52:22.964433  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m03
	I1209 10:52:22.964446  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.964457  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.964465  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.967540  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:23.163523  627293 request.go:632] Waited for 195.162382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:23.163590  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:23.163596  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.163611  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.163618  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.166875  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:23.167557  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:23.167581  627293 pod_ready.go:82] duration metric: took 399.219283ms for pod "kube-controller-manager-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.167592  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2l42s" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.364163  627293 request.go:632] Waited for 196.469736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2l42s
	I1209 10:52:23.364233  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2l42s
	I1209 10:52:23.364240  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.364250  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.364256  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.368871  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:23.564369  627293 request.go:632] Waited for 194.396631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:23.564485  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:23.564496  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.564504  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.564509  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.567861  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:23.568367  627293 pod_ready.go:93] pod "kube-proxy-2l42s" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:23.568387  627293 pod_ready.go:82] duration metric: took 400.786442ms for pod "kube-proxy-2l42s" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.568400  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dckpl" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.763515  627293 request.go:632] Waited for 195.023087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dckpl
	I1209 10:52:23.763600  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dckpl
	I1209 10:52:23.763608  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.763619  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.763628  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.767899  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:23.964038  627293 request.go:632] Waited for 195.369645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:23.964137  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:23.964144  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.964152  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.964161  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.967628  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:23.968543  627293 pod_ready.go:93] pod "kube-proxy-dckpl" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:23.968572  627293 pod_ready.go:82] duration metric: took 400.162458ms for pod "kube-proxy-dckpl" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.968586  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wrvgb" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.164418  627293 request.go:632] Waited for 195.731455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgb
	I1209 10:52:24.164497  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgb
	I1209 10:52:24.164502  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.164511  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.164516  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.167227  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:24.364211  627293 request.go:632] Waited for 196.319396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:24.364296  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:24.364308  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.364319  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.364330  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.368387  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:24.369158  627293 pod_ready.go:93] pod "kube-proxy-wrvgb" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:24.369182  627293 pod_ready.go:82] duration metric: took 400.580765ms for pod "kube-proxy-wrvgb" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.369195  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.564251  627293 request.go:632] Waited for 194.959562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382
	I1209 10:52:24.564342  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382
	I1209 10:52:24.564348  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.564357  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.564361  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.568298  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:24.764304  627293 request.go:632] Waited for 195.363618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:24.764392  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:24.764408  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.764418  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.764425  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.768139  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:24.768711  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:24.768733  627293 pod_ready.go:82] duration metric: took 399.519254ms for pod "kube-scheduler-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.768746  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.963667  627293 request.go:632] Waited for 194.82946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m02
	I1209 10:52:24.963730  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m02
	I1209 10:52:24.963736  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.963744  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.963749  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.967092  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:25.164276  627293 request.go:632] Waited for 196.380929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:25.164345  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:25.164349  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.164358  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.164364  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.169070  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:25.169673  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:25.169696  627293 pod_ready.go:82] duration metric: took 400.939865ms for pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:25.169706  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:25.363779  627293 request.go:632] Waited for 193.996151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m03
	I1209 10:52:25.363866  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m03
	I1209 10:52:25.363882  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.363912  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.363923  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.367885  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:25.563919  627293 request.go:632] Waited for 195.39244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:25.563987  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:25.563992  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.564000  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.564003  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.567759  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:25.568223  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:25.568247  627293 pod_ready.go:82] duration metric: took 398.53325ms for pod "kube-scheduler-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:25.568262  627293 pod_ready.go:39] duration metric: took 5.200212564s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 10:52:25.568288  627293 api_server.go:52] waiting for apiserver process to appear ...
	I1209 10:52:25.568359  627293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 10:52:25.588000  627293 api_server.go:72] duration metric: took 22.996035203s to wait for apiserver process to appear ...
	I1209 10:52:25.588031  627293 api_server.go:88] waiting for apiserver healthz status ...
	I1209 10:52:25.588055  627293 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I1209 10:52:25.592469  627293 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I1209 10:52:25.592544  627293 round_trippers.go:463] GET https://192.168.39.69:8443/version
	I1209 10:52:25.592549  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.592557  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.592563  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.593630  627293 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1209 10:52:25.593699  627293 api_server.go:141] control plane version: v1.31.2
	I1209 10:52:25.593714  627293 api_server.go:131] duration metric: took 5.676129ms to wait for apiserver health ...
	I1209 10:52:25.593722  627293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 10:52:25.764156  627293 request.go:632] Waited for 170.352326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:25.764268  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:25.764281  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.764294  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.764301  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.774462  627293 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1209 10:52:25.781848  627293 system_pods.go:59] 24 kube-system pods found
	I1209 10:52:25.781880  627293 system_pods.go:61] "coredns-7c65d6cfc9-8hlml" [d820cd6c-5064-4934-adc8-c68f84c09b46] Running
	I1209 10:52:25.781886  627293 system_pods.go:61] "coredns-7c65d6cfc9-rz6mw" [af297b6d-91f1-4114-b98c-cdfdfbd1589e] Running
	I1209 10:52:25.781890  627293 system_pods.go:61] "etcd-ha-792382" [eee2fbad-7f2f-4f5a-b701-abdbf8456f99] Running
	I1209 10:52:25.781894  627293 system_pods.go:61] "etcd-ha-792382-m02" [424768ff-af5d-4a31-b062-ab2c9576884d] Running
	I1209 10:52:25.781897  627293 system_pods.go:61] "etcd-ha-792382-m03" [4112b988-6915-413a-badd-c0207865e60d] Running
	I1209 10:52:25.781900  627293 system_pods.go:61] "kindnet-6hlht" [23156ebc-d366-4fc2-bedb-7a63e950b116] Running
	I1209 10:52:25.781903  627293 system_pods.go:61] "kindnet-bqp2z" [b2c40579-4d72-4efe-b921-1e0f98b91544] Running
	I1209 10:52:25.781906  627293 system_pods.go:61] "kindnet-hkrhk" [9b35011c-ab45-4f55-a60f-08f9c4509c1d] Running
	I1209 10:52:25.781909  627293 system_pods.go:61] "kube-apiserver-ha-792382" [5157cfb0-bc91-4efe-b2e2-689778d5b012] Running
	I1209 10:52:25.781913  627293 system_pods.go:61] "kube-apiserver-ha-792382-m02" [71f9ce1c-aeff-4853-83ec-01a7fb6a81d5] Running
	I1209 10:52:25.781916  627293 system_pods.go:61] "kube-apiserver-ha-792382-m03" [5cd4395c-58a8-45ba-90ea-72105d25fadd] Running
	I1209 10:52:25.781919  627293 system_pods.go:61] "kube-controller-manager-ha-792382" [f8cb7175-f6fd-4dcf-a6a5-66c44aa136ce] Running
	I1209 10:52:25.781922  627293 system_pods.go:61] "kube-controller-manager-ha-792382-m02" [2f7d8deb-ddad-47ac-ad18-5f8e7d0e095d] Running
	I1209 10:52:25.781926  627293 system_pods.go:61] "kube-controller-manager-ha-792382-m03" [5c5d03de-e7e9-491b-a6fd-fdc50b4ce7ed] Running
	I1209 10:52:25.781930  627293 system_pods.go:61] "kube-proxy-2l42s" [a4bfe3cb-9b06-4d1e-9887-c461d31aaaec] Running
	I1209 10:52:25.781934  627293 system_pods.go:61] "kube-proxy-dckpl" [13f7bda1-f9c2-4fd0-96e0-b6aee1139bc1] Running
	I1209 10:52:25.781940  627293 system_pods.go:61] "kube-proxy-wrvgb" [2531e29f-a4d5-41f9-8c38-3220b4caf96b] Running
	I1209 10:52:25.781942  627293 system_pods.go:61] "kube-scheduler-ha-792382" [b11693d0-b2e7-45a1-a1a2-6519a1535c45] Running
	I1209 10:52:25.781945  627293 system_pods.go:61] "kube-scheduler-ha-792382-m02" [250ce40c-5cbf-4f5d-a475-4f3d7ec100d9] Running
	I1209 10:52:25.781948  627293 system_pods.go:61] "kube-scheduler-ha-792382-m03" [b994f699-40b5-423e-b92f-3ca6208e69d0] Running
	I1209 10:52:25.781951  627293 system_pods.go:61] "kube-vip-ha-792382" [511f3a84-b444-4603-8f6f-eeeb262b1384] Running
	I1209 10:52:25.781954  627293 system_pods.go:61] "kube-vip-ha-792382-m02" [f2110c28-09f5-42f9-8e16-beec9759a0a2] Running
	I1209 10:52:25.781957  627293 system_pods.go:61] "kube-vip-ha-792382-m03" [5eee7c3c-1b75-48ad-813e-963fa4308d1b] Running
	I1209 10:52:25.781960  627293 system_pods.go:61] "storage-provisioner" [4419fe4f-e2ed-4ecb-a912-2dd074e29727] Running
	I1209 10:52:25.781965  627293 system_pods.go:74] duration metric: took 188.238253ms to wait for pod list to return data ...
	I1209 10:52:25.781976  627293 default_sa.go:34] waiting for default service account to be created ...
	I1209 10:52:25.964450  627293 request.go:632] Waited for 182.375955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I1209 10:52:25.964524  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I1209 10:52:25.964529  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.964538  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.964543  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.968489  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:25.968636  627293 default_sa.go:45] found service account: "default"
	I1209 10:52:25.968653  627293 default_sa.go:55] duration metric: took 186.669919ms for default service account to be created ...
	I1209 10:52:25.968664  627293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 10:52:26.163895  627293 request.go:632] Waited for 195.104758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:26.163963  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:26.163969  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:26.163977  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:26.163981  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:26.169457  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:26.176126  627293 system_pods.go:86] 24 kube-system pods found
	I1209 10:52:26.176160  627293 system_pods.go:89] "coredns-7c65d6cfc9-8hlml" [d820cd6c-5064-4934-adc8-c68f84c09b46] Running
	I1209 10:52:26.176166  627293 system_pods.go:89] "coredns-7c65d6cfc9-rz6mw" [af297b6d-91f1-4114-b98c-cdfdfbd1589e] Running
	I1209 10:52:26.176171  627293 system_pods.go:89] "etcd-ha-792382" [eee2fbad-7f2f-4f5a-b701-abdbf8456f99] Running
	I1209 10:52:26.176175  627293 system_pods.go:89] "etcd-ha-792382-m02" [424768ff-af5d-4a31-b062-ab2c9576884d] Running
	I1209 10:52:26.176178  627293 system_pods.go:89] "etcd-ha-792382-m03" [4112b988-6915-413a-badd-c0207865e60d] Running
	I1209 10:52:26.176184  627293 system_pods.go:89] "kindnet-6hlht" [23156ebc-d366-4fc2-bedb-7a63e950b116] Running
	I1209 10:52:26.176189  627293 system_pods.go:89] "kindnet-bqp2z" [b2c40579-4d72-4efe-b921-1e0f98b91544] Running
	I1209 10:52:26.176195  627293 system_pods.go:89] "kindnet-hkrhk" [9b35011c-ab45-4f55-a60f-08f9c4509c1d] Running
	I1209 10:52:26.176201  627293 system_pods.go:89] "kube-apiserver-ha-792382" [5157cfb0-bc91-4efe-b2e2-689778d5b012] Running
	I1209 10:52:26.176206  627293 system_pods.go:89] "kube-apiserver-ha-792382-m02" [71f9ce1c-aeff-4853-83ec-01a7fb6a81d5] Running
	I1209 10:52:26.176212  627293 system_pods.go:89] "kube-apiserver-ha-792382-m03" [5cd4395c-58a8-45ba-90ea-72105d25fadd] Running
	I1209 10:52:26.176220  627293 system_pods.go:89] "kube-controller-manager-ha-792382" [f8cb7175-f6fd-4dcf-a6a5-66c44aa136ce] Running
	I1209 10:52:26.176231  627293 system_pods.go:89] "kube-controller-manager-ha-792382-m02" [2f7d8deb-ddad-47ac-ad18-5f8e7d0e095d] Running
	I1209 10:52:26.176240  627293 system_pods.go:89] "kube-controller-manager-ha-792382-m03" [5c5d03de-e7e9-491b-a6fd-fdc50b4ce7ed] Running
	I1209 10:52:26.176245  627293 system_pods.go:89] "kube-proxy-2l42s" [a4bfe3cb-9b06-4d1e-9887-c461d31aaaec] Running
	I1209 10:52:26.176254  627293 system_pods.go:89] "kube-proxy-dckpl" [13f7bda1-f9c2-4fd0-96e0-b6aee1139bc1] Running
	I1209 10:52:26.176263  627293 system_pods.go:89] "kube-proxy-wrvgb" [2531e29f-a4d5-41f9-8c38-3220b4caf96b] Running
	I1209 10:52:26.176272  627293 system_pods.go:89] "kube-scheduler-ha-792382" [b11693d0-b2e7-45a1-a1a2-6519a1535c45] Running
	I1209 10:52:26.176285  627293 system_pods.go:89] "kube-scheduler-ha-792382-m02" [250ce40c-5cbf-4f5d-a475-4f3d7ec100d9] Running
	I1209 10:52:26.176294  627293 system_pods.go:89] "kube-scheduler-ha-792382-m03" [b994f699-40b5-423e-b92f-3ca6208e69d0] Running
	I1209 10:52:26.176303  627293 system_pods.go:89] "kube-vip-ha-792382" [511f3a84-b444-4603-8f6f-eeeb262b1384] Running
	I1209 10:52:26.176312  627293 system_pods.go:89] "kube-vip-ha-792382-m02" [f2110c28-09f5-42f9-8e16-beec9759a0a2] Running
	I1209 10:52:26.176320  627293 system_pods.go:89] "kube-vip-ha-792382-m03" [5eee7c3c-1b75-48ad-813e-963fa4308d1b] Running
	I1209 10:52:26.176327  627293 system_pods.go:89] "storage-provisioner" [4419fe4f-e2ed-4ecb-a912-2dd074e29727] Running
	I1209 10:52:26.176338  627293 system_pods.go:126] duration metric: took 207.663846ms to wait for k8s-apps to be running ...
	I1209 10:52:26.176348  627293 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 10:52:26.176410  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:52:26.193241  627293 system_svc.go:56] duration metric: took 16.882967ms WaitForService to wait for kubelet
	I1209 10:52:26.193274  627293 kubeadm.go:582] duration metric: took 23.601316183s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 10:52:26.193295  627293 node_conditions.go:102] verifying NodePressure condition ...
	I1209 10:52:26.363791  627293 request.go:632] Waited for 170.378697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes
	I1209 10:52:26.363869  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes
	I1209 10:52:26.363877  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:26.363893  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:26.363902  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:26.369525  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:26.370723  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:52:26.370747  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:52:26.370760  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:52:26.370763  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:52:26.370766  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:52:26.370770  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:52:26.370774  627293 node_conditions.go:105] duration metric: took 177.473705ms to run NodePressure ...
	I1209 10:52:26.370790  627293 start.go:241] waiting for startup goroutines ...
	I1209 10:52:26.370823  627293 start.go:255] writing updated cluster config ...
	I1209 10:52:26.371156  627293 ssh_runner.go:195] Run: rm -f paused
	I1209 10:52:26.426485  627293 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 10:52:26.428634  627293 out.go:177] * Done! kubectl is now configured to use "ha-792382" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.356249138Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10125d20-e1a3-4d42-a4af-2c6714425501 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.358707583Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=00063728-11e6-4da6-bb6d-e864d421b71d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.359205034Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741766359181345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00063728-11e6-4da6-bb6d-e864d421b71d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.360011316Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dbebb466-b06e-4466-ae43-ea41a9197561 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.360068149Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbebb466-b06e-4466-ae43-ea41a9197561 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.360297060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3354d3bec20606deb1f19130215939f5c7b2123b73319ef892bffc278d375fb8,PodSandboxId:e47f42b7e0900b7149c93ac504858a9b845addf2278ae1c0abd45e66fc9df066,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733741551077576684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z9wjm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b911f2-4cd1-486a-9276-1e98745ede0e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd,PodSandboxId:a5c60a0e3c19becc3387cabd45cb24c34922bd26351286c88744f9e2caf6f8d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412981896414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8hlml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d820cd6c-5064-4934-adc8-c68f84c09b46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733,PodSandboxId:038ff3d97cfe50dddc147974bab92d243212a5896794a4bc3f1ce2b77fdb5e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412958470450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rz6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
af297b6d-91f1-4114-b98c-cdfdfbd1589e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa96349b5a7bf6e8223fd16d01f77aa0d3c45aa83ad28fc42eec5d2a80dd24,PodSandboxId:02bd44e5a67d9df1c56cc6297ec66940ce286c89b103f7efa4b35259d2a2f8c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733741412903452485,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4419fe4f-e2ed-4ecb-a912-2dd074e29727,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3,PodSandboxId:cfb791c6d05cec0b9cc244408089c309a41f321fe06c2083eabaeaf9184c0ff6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733741400823508110,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c40579-4d72-4efe-b921-1e0f98b91544,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522,PodSandboxId:82b54a7467a7aa9cf761959fb4e7e7ea830c6fdcf33629ec45c8b55326334f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733741398
382511881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrvgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2531e29f-a4d5-41f9-8c38-3220b4caf96b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082e8ff7e6c7e3e1110852687152ddb22202b3134aa3105a8b11aa7d702bcd6a,PodSandboxId:1486ff19db45e7b9cb8b545f56579a8765f6cef7fe3b1d44f967a5c5823b4cdc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173374138888
4669062,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9922f13afb31842008ba0179dabd897e,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63,PodSandboxId:27e12e36b1bd81bfdb6cc876b8c1a9c925ed2eec6ac687056f4bd10eb872b7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733741386069265058,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2460a8b15a62b9cf3ad5343586bde402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f,PodSandboxId:7bbf390b8ef03829849defea1ba3817acb7a86ce1705fb53d765d7ddca57a066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733741386071035939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89a89b1c65df6e3ad9608c5607172f77,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee,PodSandboxId:9493b93aded71efd1c05e2df3b9537c0c63dc9b2e9abdbf1be430d3c29d5ad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733741386009117508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 082fcfac40bcf36b76f1e733a9f73bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604,PodSandboxId:02e8433fa67cc5aef5d43a5edf820340ad5d91eef3bdd9e7149eeec0e55a8c95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733741385994463722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d8d358ed72ac30c9365aedd3aee4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dbebb466-b06e-4466-ae43-ea41a9197561 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.400510534Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a531588f-005c-4986-a7ee-a3926b9181a4 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.400593684Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a531588f-005c-4986-a7ee-a3926b9181a4 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.402488482Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d12abc89-f465-4eee-9ef7-799cc37ecb01 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.402971599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741766402941294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d12abc89-f465-4eee-9ef7-799cc37ecb01 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.403914716Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=644b8a2f-0765-46ae-bd7f-4b28317bfbe8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.403980413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=644b8a2f-0765-46ae-bd7f-4b28317bfbe8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.404276956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3354d3bec20606deb1f19130215939f5c7b2123b73319ef892bffc278d375fb8,PodSandboxId:e47f42b7e0900b7149c93ac504858a9b845addf2278ae1c0abd45e66fc9df066,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733741551077576684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z9wjm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b911f2-4cd1-486a-9276-1e98745ede0e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd,PodSandboxId:a5c60a0e3c19becc3387cabd45cb24c34922bd26351286c88744f9e2caf6f8d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412981896414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8hlml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d820cd6c-5064-4934-adc8-c68f84c09b46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733,PodSandboxId:038ff3d97cfe50dddc147974bab92d243212a5896794a4bc3f1ce2b77fdb5e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412958470450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rz6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
af297b6d-91f1-4114-b98c-cdfdfbd1589e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa96349b5a7bf6e8223fd16d01f77aa0d3c45aa83ad28fc42eec5d2a80dd24,PodSandboxId:02bd44e5a67d9df1c56cc6297ec66940ce286c89b103f7efa4b35259d2a2f8c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733741412903452485,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4419fe4f-e2ed-4ecb-a912-2dd074e29727,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3,PodSandboxId:cfb791c6d05cec0b9cc244408089c309a41f321fe06c2083eabaeaf9184c0ff6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733741400823508110,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c40579-4d72-4efe-b921-1e0f98b91544,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522,PodSandboxId:82b54a7467a7aa9cf761959fb4e7e7ea830c6fdcf33629ec45c8b55326334f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733741398
382511881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrvgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2531e29f-a4d5-41f9-8c38-3220b4caf96b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082e8ff7e6c7e3e1110852687152ddb22202b3134aa3105a8b11aa7d702bcd6a,PodSandboxId:1486ff19db45e7b9cb8b545f56579a8765f6cef7fe3b1d44f967a5c5823b4cdc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173374138888
4669062,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9922f13afb31842008ba0179dabd897e,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63,PodSandboxId:27e12e36b1bd81bfdb6cc876b8c1a9c925ed2eec6ac687056f4bd10eb872b7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733741386069265058,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2460a8b15a62b9cf3ad5343586bde402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f,PodSandboxId:7bbf390b8ef03829849defea1ba3817acb7a86ce1705fb53d765d7ddca57a066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733741386071035939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89a89b1c65df6e3ad9608c5607172f77,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee,PodSandboxId:9493b93aded71efd1c05e2df3b9537c0c63dc9b2e9abdbf1be430d3c29d5ad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733741386009117508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 082fcfac40bcf36b76f1e733a9f73bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604,PodSandboxId:02e8433fa67cc5aef5d43a5edf820340ad5d91eef3bdd9e7149eeec0e55a8c95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733741385994463722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d8d358ed72ac30c9365aedd3aee4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=644b8a2f-0765-46ae-bd7f-4b28317bfbe8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.443728503Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=e94b49a5-2c60-4c1d-be4c-c9a9d606d381 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.444008463Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e47f42b7e0900b7149c93ac504858a9b845addf2278ae1c0abd45e66fc9df066,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-z9wjm,Uid:00b911f2-4cd1-486a-9276-1e98745ede0e,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733741547721451467,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-z9wjm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b911f2-4cd1-486a-9276-1e98745ede0e,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T10:52:27.406707129Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:02bd44e5a67d9df1c56cc6297ec66940ce286c89b103f7efa4b35259d2a2f8c7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4419fe4f-e2ed-4ecb-a912-2dd074e29727,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1733741412725372136,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4419fe4f-e2ed-4ecb-a912-2dd074e29727,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-09T10:50:12.389187976Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:038ff3d97cfe50dddc147974bab92d243212a5896794a4bc3f1ce2b77fdb5e39,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-rz6mw,Uid:af297b6d-91f1-4114-b98c-cdfdfbd1589e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733741412714144056,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-rz6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af297b6d-91f1-4114-b98c-cdfdfbd1589e,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T10:50:12.385407546Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a5c60a0e3c19becc3387cabd45cb24c34922bd26351286c88744f9e2caf6f8d6,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-8hlml,Uid:d820cd6c-5064-4934-adc8-c68f84c09b46,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1733741412691272331,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-8hlml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d820cd6c-5064-4934-adc8-c68f84c09b46,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T10:50:12.378384594Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:82b54a7467a7aa9cf761959fb4e7e7ea830c6fdcf33629ec45c8b55326334f67,Metadata:&PodSandboxMetadata{Name:kube-proxy-wrvgb,Uid:2531e29f-a4d5-41f9-8c38-3220b4caf96b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733741398278045244,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wrvgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2531e29f-a4d5-41f9-8c38-3220b4caf96b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-12-09T10:49:56.468694189Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cfb791c6d05cec0b9cc244408089c309a41f321fe06c2083eabaeaf9184c0ff6,Metadata:&PodSandboxMetadata{Name:kindnet-bqp2z,Uid:b2c40579-4d72-4efe-b921-1e0f98b91544,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733741396742615236,Labels:map[string]string{app: kindnet,controller-revision-hash: 7dff7cd75d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-bqp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c40579-4d72-4efe-b921-1e0f98b91544,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T10:49:56.430662967Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9493b93aded71efd1c05e2df3b9537c0c63dc9b2e9abdbf1be430d3c29d5ad9f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-792382,Uid:082fcfac40bcf36b76f1e733a9f73bc8,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1733741385787750594,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 082fcfac40bcf36b76f1e733a9f73bc8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 082fcfac40bcf36b76f1e733a9f73bc8,kubernetes.io/config.seen: 2024-12-09T10:49:45.114989762Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:02e8433fa67cc5aef5d43a5edf820340ad5d91eef3bdd9e7149eeec0e55a8c95,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-792382,Uid:a4d8d358ed72ac30c9365aedd3aee4d1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733741385786710751,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d8d358ed72ac30c9365aedd3aee4d
1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a4d8d358ed72ac30c9365aedd3aee4d1,kubernetes.io/config.seen: 2024-12-09T10:49:45.114988700Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7bbf390b8ef03829849defea1ba3817acb7a86ce1705fb53d765d7ddca57a066,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-792382,Uid:89a89b1c65df6e3ad9608c5607172f77,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733741385780260822,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89a89b1c65df6e3ad9608c5607172f77,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.69:8443,kubernetes.io/config.hash: 89a89b1c65df6e3ad9608c5607172f77,kubernetes.io/config.seen: 2024-12-09T10:49:45.114987412Z,kubernetes.io/config.source: file,},RuntimeHandler:,}
,&PodSandbox{Id:27e12e36b1bd81bfdb6cc876b8c1a9c925ed2eec6ac687056f4bd10eb872b7a6,Metadata:&PodSandboxMetadata{Name:etcd-ha-792382,Uid:2460a8b15a62b9cf3ad5343586bde402,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733741385774534212,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2460a8b15a62b9cf3ad5343586bde402,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.69:2379,kubernetes.io/config.hash: 2460a8b15a62b9cf3ad5343586bde402,kubernetes.io/config.seen: 2024-12-09T10:49:45.114986053Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1486ff19db45e7b9cb8b545f56579a8765f6cef7fe3b1d44f967a5c5823b4cdc,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-792382,Uid:9922f13afb31842008ba0179dabd897e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733741385757881865,Labels:
map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9922f13afb31842008ba0179dabd897e,},Annotations:map[string]string{kubernetes.io/config.hash: 9922f13afb31842008ba0179dabd897e,kubernetes.io/config.seen: 2024-12-09T10:49:45.114982392Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e94b49a5-2c60-4c1d-be4c-c9a9d606d381 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.444940118Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c1840aa-b1ba-48da-9223-edf944e22595 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.445001951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c1840aa-b1ba-48da-9223-edf944e22595 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.445249943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3354d3bec20606deb1f19130215939f5c7b2123b73319ef892bffc278d375fb8,PodSandboxId:e47f42b7e0900b7149c93ac504858a9b845addf2278ae1c0abd45e66fc9df066,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733741551077576684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z9wjm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b911f2-4cd1-486a-9276-1e98745ede0e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd,PodSandboxId:a5c60a0e3c19becc3387cabd45cb24c34922bd26351286c88744f9e2caf6f8d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412981896414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8hlml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d820cd6c-5064-4934-adc8-c68f84c09b46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733,PodSandboxId:038ff3d97cfe50dddc147974bab92d243212a5896794a4bc3f1ce2b77fdb5e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412958470450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rz6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
af297b6d-91f1-4114-b98c-cdfdfbd1589e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa96349b5a7bf6e8223fd16d01f77aa0d3c45aa83ad28fc42eec5d2a80dd24,PodSandboxId:02bd44e5a67d9df1c56cc6297ec66940ce286c89b103f7efa4b35259d2a2f8c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733741412903452485,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4419fe4f-e2ed-4ecb-a912-2dd074e29727,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3,PodSandboxId:cfb791c6d05cec0b9cc244408089c309a41f321fe06c2083eabaeaf9184c0ff6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733741400823508110,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c40579-4d72-4efe-b921-1e0f98b91544,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522,PodSandboxId:82b54a7467a7aa9cf761959fb4e7e7ea830c6fdcf33629ec45c8b55326334f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733741398
382511881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrvgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2531e29f-a4d5-41f9-8c38-3220b4caf96b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082e8ff7e6c7e3e1110852687152ddb22202b3134aa3105a8b11aa7d702bcd6a,PodSandboxId:1486ff19db45e7b9cb8b545f56579a8765f6cef7fe3b1d44f967a5c5823b4cdc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173374138888
4669062,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9922f13afb31842008ba0179dabd897e,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63,PodSandboxId:27e12e36b1bd81bfdb6cc876b8c1a9c925ed2eec6ac687056f4bd10eb872b7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733741386069265058,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2460a8b15a62b9cf3ad5343586bde402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f,PodSandboxId:7bbf390b8ef03829849defea1ba3817acb7a86ce1705fb53d765d7ddca57a066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733741386071035939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89a89b1c65df6e3ad9608c5607172f77,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee,PodSandboxId:9493b93aded71efd1c05e2df3b9537c0c63dc9b2e9abdbf1be430d3c29d5ad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733741386009117508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 082fcfac40bcf36b76f1e733a9f73bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604,PodSandboxId:02e8433fa67cc5aef5d43a5edf820340ad5d91eef3bdd9e7149eeec0e55a8c95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733741385994463722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d8d358ed72ac30c9365aedd3aee4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c1840aa-b1ba-48da-9223-edf944e22595 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.454034839Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9525aa15-1e28-401a-97ad-4cf73335a18f name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.454093385Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9525aa15-1e28-401a-97ad-4cf73335a18f name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.455091075Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c10cc0fe-5ae0-42f4-af01-87b0942a285f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.455643471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741766455621925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c10cc0fe-5ae0-42f4-af01-87b0942a285f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.456296881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21c68dfa-132e-4cf5-bfb0-df195331ff7a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.456446210Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21c68dfa-132e-4cf5-bfb0-df195331ff7a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:06 ha-792382 crio[665]: time="2024-12-09 10:56:06.457873621Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3354d3bec20606deb1f19130215939f5c7b2123b73319ef892bffc278d375fb8,PodSandboxId:e47f42b7e0900b7149c93ac504858a9b845addf2278ae1c0abd45e66fc9df066,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733741551077576684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z9wjm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b911f2-4cd1-486a-9276-1e98745ede0e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd,PodSandboxId:a5c60a0e3c19becc3387cabd45cb24c34922bd26351286c88744f9e2caf6f8d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412981896414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8hlml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d820cd6c-5064-4934-adc8-c68f84c09b46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733,PodSandboxId:038ff3d97cfe50dddc147974bab92d243212a5896794a4bc3f1ce2b77fdb5e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412958470450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rz6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
af297b6d-91f1-4114-b98c-cdfdfbd1589e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa96349b5a7bf6e8223fd16d01f77aa0d3c45aa83ad28fc42eec5d2a80dd24,PodSandboxId:02bd44e5a67d9df1c56cc6297ec66940ce286c89b103f7efa4b35259d2a2f8c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733741412903452485,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4419fe4f-e2ed-4ecb-a912-2dd074e29727,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3,PodSandboxId:cfb791c6d05cec0b9cc244408089c309a41f321fe06c2083eabaeaf9184c0ff6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733741400823508110,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c40579-4d72-4efe-b921-1e0f98b91544,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522,PodSandboxId:82b54a7467a7aa9cf761959fb4e7e7ea830c6fdcf33629ec45c8b55326334f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733741398
382511881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrvgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2531e29f-a4d5-41f9-8c38-3220b4caf96b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082e8ff7e6c7e3e1110852687152ddb22202b3134aa3105a8b11aa7d702bcd6a,PodSandboxId:1486ff19db45e7b9cb8b545f56579a8765f6cef7fe3b1d44f967a5c5823b4cdc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173374138888
4669062,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9922f13afb31842008ba0179dabd897e,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63,PodSandboxId:27e12e36b1bd81bfdb6cc876b8c1a9c925ed2eec6ac687056f4bd10eb872b7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733741386069265058,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2460a8b15a62b9cf3ad5343586bde402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f,PodSandboxId:7bbf390b8ef03829849defea1ba3817acb7a86ce1705fb53d765d7ddca57a066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733741386071035939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89a89b1c65df6e3ad9608c5607172f77,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee,PodSandboxId:9493b93aded71efd1c05e2df3b9537c0c63dc9b2e9abdbf1be430d3c29d5ad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733741386009117508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 082fcfac40bcf36b76f1e733a9f73bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604,PodSandboxId:02e8433fa67cc5aef5d43a5edf820340ad5d91eef3bdd9e7149eeec0e55a8c95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733741385994463722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d8d358ed72ac30c9365aedd3aee4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21c68dfa-132e-4cf5-bfb0-df195331ff7a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3354d3bec2060       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e47f42b7e0900       busybox-7dff88458-z9wjm
	f4ba11ff07ea5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   a5c60a0e3c19b       coredns-7c65d6cfc9-8hlml
	afc0f0aea4c8a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   038ff3d97cfe5       coredns-7c65d6cfc9-rz6mw
	d9fa96349b5a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   02bd44e5a67d9       storage-provisioner
	b6bf7c7cf0d68       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3    6 minutes ago       Running             kindnet-cni               0                   cfb791c6d05ce       kindnet-bqp2z
	3cf6196a4789e       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   82b54a7467a7a       kube-proxy-wrvgb
	082e8ff7e6c7e       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   1486ff19db45e       kube-vip-ha-792382
	64b96c1c22970       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   7bbf390b8ef03       kube-apiserver-ha-792382
	778345b29099a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   27e12e36b1bd8       etcd-ha-792382
	d93c68b855d9f       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   9493b93aded71       kube-scheduler-ha-792382
	00db8f77881ef       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   02e8433fa67cc       kube-controller-manager-ha-792382
	
	
	==> coredns [afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733] <==
	[INFO] 10.244.2.2:57485 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000178522s
	[INFO] 10.244.2.2:51008 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003461693s
	[INFO] 10.244.2.2:51209 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132423s
	[INFO] 10.244.2.2:44233 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160403s
	[INFO] 10.244.2.2:36343 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113366s
	[INFO] 10.244.1.2:40108 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001755871s
	[INFO] 10.244.1.2:57627 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088641s
	[INFO] 10.244.0.4:49175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210271s
	[INFO] 10.244.0.4:42721 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001653061s
	[INFO] 10.244.0.4:53085 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087293s
	[INFO] 10.244.2.2:46633 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111394s
	[INFO] 10.244.2.2:34060 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087724s
	[INFO] 10.244.2.2:42086 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112165s
	[INFO] 10.244.1.2:55917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167759s
	[INFO] 10.244.1.2:38190 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113655s
	[INFO] 10.244.1.2:46262 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092112s
	[INFO] 10.244.1.2:55410 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080217s
	[INFO] 10.244.0.4:43802 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073668s
	[INFO] 10.244.0.4:48010 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099328s
	[INFO] 10.244.0.4:45687 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004859s
	[INFO] 10.244.2.2:35669 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019184s
	[INFO] 10.244.2.2:54242 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000232065s
	[INFO] 10.244.2.2:41931 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000140914s
	[INFO] 10.244.0.4:48531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105047s
	[INFO] 10.244.0.4:36756 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000068167s
	
	
	==> coredns [f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd] <==
	[INFO] 10.244.0.4:58900 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184784s
	[INFO] 10.244.0.4:59585 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.004212695s
	[INFO] 10.244.0.4:42331 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001567158s
	[INFO] 10.244.2.2:43555 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003700387s
	[INFO] 10.244.2.2:38437 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000268841s
	[INFO] 10.244.1.2:36722 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174774s
	[INFO] 10.244.1.2:46295 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000167521s
	[INFO] 10.244.1.2:36004 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192453s
	[INFO] 10.244.1.2:54275 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001271437s
	[INFO] 10.244.1.2:48954 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183213s
	[INFO] 10.244.1.2:57839 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00017811s
	[INFO] 10.244.0.4:54946 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001925365s
	[INFO] 10.244.0.4:59669 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000722s
	[INFO] 10.244.0.4:40897 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074421s
	[INFO] 10.244.0.4:46937 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174065s
	[INFO] 10.244.0.4:34613 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075946s
	[INFO] 10.244.2.2:44189 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216239s
	[INFO] 10.244.0.4:39246 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155453s
	[INFO] 10.244.2.2:48134 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000162494s
	[INFO] 10.244.1.2:44589 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125364s
	[INFO] 10.244.1.2:59702 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00019329s
	[INFO] 10.244.1.2:58920 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000146935s
	[INFO] 10.244.1.2:55802 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000116158s
	[INFO] 10.244.0.4:47226 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097556s
	[INFO] 10.244.0.4:42857 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073279s
	
	
	==> describe nodes <==
	Name:               ha-792382
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792382
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=ha-792382
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T10_49_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:49:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792382
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:55:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 10:52:55 +0000   Mon, 09 Dec 2024 10:49:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 10:52:55 +0000   Mon, 09 Dec 2024 10:49:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 10:52:55 +0000   Mon, 09 Dec 2024 10:49:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 10:52:55 +0000   Mon, 09 Dec 2024 10:50:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    ha-792382
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c956a5ad4d142099b593c1d9352f7b5
	  System UUID:                2c956a5a-d4d1-4209-9b59-3c1d9352f7b5
	  Boot ID:                    5140ef96-1a92-4f56-b80b-7e99ce150ca0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-z9wjm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 coredns-7c65d6cfc9-8hlml             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 coredns-7c65d6cfc9-rz6mw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 etcd-ha-792382                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m14s
	  kube-system                 kindnet-bqp2z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m10s
	  kube-system                 kube-apiserver-ha-792382             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-controller-manager-ha-792382    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-proxy-wrvgb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-scheduler-ha-792382             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-vip-ha-792382                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m8s                   kube-proxy       
	  Normal  NodeHasSufficientPID     6m21s (x7 over 6m21s)  kubelet          Node ha-792382 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m21s (x8 over 6m21s)  kubelet          Node ha-792382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s (x8 over 6m21s)  kubelet          Node ha-792382 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m15s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m14s                  kubelet          Node ha-792382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s                  kubelet          Node ha-792382 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s                  kubelet          Node ha-792382 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m11s                  node-controller  Node ha-792382 event: Registered Node ha-792382 in Controller
	  Normal  NodeReady                5m54s                  kubelet          Node ha-792382 status is now: NodeReady
	  Normal  RegisteredNode           5m14s                  node-controller  Node ha-792382 event: Registered Node ha-792382 in Controller
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-792382 event: Registered Node ha-792382 in Controller
	
	
	Name:               ha-792382-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792382-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=ha-792382
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T10_50_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:50:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792382-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:53:37 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 09 Dec 2024 10:52:48 +0000   Mon, 09 Dec 2024 10:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 09 Dec 2024 10:52:48 +0000   Mon, 09 Dec 2024 10:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 09 Dec 2024 10:52:48 +0000   Mon, 09 Dec 2024 10:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 09 Dec 2024 10:52:48 +0000   Mon, 09 Dec 2024 10:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-792382-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 167721adca2249268bf51688530c2893
	  System UUID:                167721ad-ca22-4926-8bf5-1688530c2893
	  Boot ID:                    74f1c671-e420-4f88-b05b-e50c0597ee01
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rbrpt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-792382-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m20s
	  kube-system                 kindnet-hkrhk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m22s
	  kube-system                 kube-apiserver-ha-792382-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-controller-manager-ha-792382-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-proxy-dckpl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-scheduler-ha-792382-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-vip-ha-792382-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m22s (x8 over 5m22s)  kubelet          Node ha-792382-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x8 over 5m22s)  kubelet          Node ha-792382-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s (x7 over 5m22s)  kubelet          Node ha-792382-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-792382-m02 event: Registered Node ha-792382-m02 in Controller
	  Normal  RegisteredNode           5m14s                  node-controller  Node ha-792382-m02 event: Registered Node ha-792382-m02 in Controller
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-792382-m02 event: Registered Node ha-792382-m02 in Controller
	  Normal  NodeNotReady             106s                   node-controller  Node ha-792382-m02 status is now: NodeNotReady
	
	
	Name:               ha-792382-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792382-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=ha-792382
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T10_52_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:51:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792382-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:56:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 10:53:00 +0000   Mon, 09 Dec 2024 10:51:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 10:53:00 +0000   Mon, 09 Dec 2024 10:51:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 10:53:00 +0000   Mon, 09 Dec 2024 10:51:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 10:53:00 +0000   Mon, 09 Dec 2024 10:52:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.82
	  Hostname:    ha-792382-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7e770a97238401cb03ba22edd7f66bc
	  System UUID:                c7e770a9-7238-401c-b03b-a22edd7f66bc
	  Boot ID:                    75bcd068-8763-4e3a-b01e-036ac11d2956
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ft8s2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-792382-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-6hlht                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m8s
	  kube-system                 kube-apiserver-ha-792382-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-ha-792382-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-2l42s                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-ha-792382-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-792382-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m4s                 kube-proxy       
	  Normal  Starting                 4m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node ha-792382-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node ha-792382-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node ha-792382-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-792382-m03 event: Registered Node ha-792382-m03 in Controller
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-792382-m03 event: Registered Node ha-792382-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-792382-m03 event: Registered Node ha-792382-m03 in Controller
	
	
	Name:               ha-792382-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792382-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=ha-792382
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T10_53_05_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:53:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792382-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:55:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 10:53:35 +0000   Mon, 09 Dec 2024 10:53:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 10:53:35 +0000   Mon, 09 Dec 2024 10:53:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 10:53:35 +0000   Mon, 09 Dec 2024 10:53:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 10:53:35 +0000   Mon, 09 Dec 2024 10:53:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    ha-792382-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7109c0766654d148c611df97b2ed795
	  System UUID:                f7109c07-6665-4d14-8c61-1df97b2ed795
	  Boot ID:                    8d79820d-d818-486f-88fb-9a376256bc79
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rwsmp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-727n6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-792382-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-792382-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-792382-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-792382-m04 event: Registered Node ha-792382-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-792382-m04 event: Registered Node ha-792382-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-792382-m04 event: Registered Node ha-792382-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-792382-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 9 10:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052723] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037555] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.827157] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.929161] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.560988] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.837514] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.057481] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052320] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.193651] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.117185] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.263430] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.805323] systemd-fstab-generator[749]: Ignoring "noauto" option for root device
	[  +3.647118] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.055434] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.026961] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.076746] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.128281] kauditd_printk_skb: 21 callbacks suppressed
	[Dec 9 10:50] kauditd_printk_skb: 38 callbacks suppressed
	[ +38.131475] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63] <==
	{"level":"warn","ts":"2024-12-09T10:56:06.695183Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.710729Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.719989Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.724168Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.731851Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.738972Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.752515Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.761199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.764739Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.767841Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.776776Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.784068Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.790791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.793540Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.796993Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.803433Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.811875Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.832107Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.835556Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.844557Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.853513Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.863551Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.876671Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.888182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:06.931706Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:56:06 up 6 min,  0 users,  load average: 0.49, 0.32, 0.16
	Linux ha-792382 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3] <==
	I1209 10:55:31.782952       1 main.go:324] Node ha-792382-m04 has CIDR [10.244.3.0/24] 
	I1209 10:55:41.791456       1 main.go:297] Handling node with IPs: map[192.168.39.69:{}]
	I1209 10:55:41.791551       1 main.go:301] handling current node
	I1209 10:55:41.791578       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1209 10:55:41.791596       1 main.go:324] Node ha-792382-m02 has CIDR [10.244.1.0/24] 
	I1209 10:55:41.791811       1 main.go:297] Handling node with IPs: map[192.168.39.82:{}]
	I1209 10:55:41.791904       1 main.go:324] Node ha-792382-m03 has CIDR [10.244.2.0/24] 
	I1209 10:55:41.792096       1 main.go:297] Handling node with IPs: map[192.168.39.54:{}]
	I1209 10:55:41.792124       1 main.go:324] Node ha-792382-m04 has CIDR [10.244.3.0/24] 
	I1209 10:55:51.785788       1 main.go:297] Handling node with IPs: map[192.168.39.69:{}]
	I1209 10:55:51.785901       1 main.go:301] handling current node
	I1209 10:55:51.785962       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1209 10:55:51.785993       1 main.go:324] Node ha-792382-m02 has CIDR [10.244.1.0/24] 
	I1209 10:55:51.786189       1 main.go:297] Handling node with IPs: map[192.168.39.82:{}]
	I1209 10:55:51.786293       1 main.go:324] Node ha-792382-m03 has CIDR [10.244.2.0/24] 
	I1209 10:55:51.786573       1 main.go:297] Handling node with IPs: map[192.168.39.54:{}]
	I1209 10:55:51.786644       1 main.go:324] Node ha-792382-m04 has CIDR [10.244.3.0/24] 
	I1209 10:56:01.783030       1 main.go:297] Handling node with IPs: map[192.168.39.69:{}]
	I1209 10:56:01.783176       1 main.go:301] handling current node
	I1209 10:56:01.783209       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1209 10:56:01.783262       1 main.go:324] Node ha-792382-m02 has CIDR [10.244.1.0/24] 
	I1209 10:56:01.783503       1 main.go:297] Handling node with IPs: map[192.168.39.82:{}]
	I1209 10:56:01.783567       1 main.go:324] Node ha-792382-m03 has CIDR [10.244.2.0/24] 
	I1209 10:56:01.784071       1 main.go:297] Handling node with IPs: map[192.168.39.54:{}]
	I1209 10:56:01.784166       1 main.go:324] Node ha-792382-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f] <==
	I1209 10:49:52.072307       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1209 10:49:52.095069       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1209 10:49:56.392767       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1209 10:49:56.516080       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1209 10:51:59.302973       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1209 10:51:59.303668       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 331.746µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1209 10:51:59.304570       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1209 10:51:59.308414       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1209 10:51:59.309695       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="6.795998ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1209 10:52:32.421048       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43832: use of closed network connection
	E1209 10:52:32.619590       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43852: use of closed network connection
	E1209 10:52:32.815616       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43862: use of closed network connection
	E1209 10:52:33.010440       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43888: use of closed network connection
	E1209 10:52:33.191451       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43910: use of closed network connection
	E1209 10:52:33.385647       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43930: use of closed network connection
	E1209 10:52:33.571472       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43946: use of closed network connection
	E1209 10:52:33.741655       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43972: use of closed network connection
	E1209 10:52:33.919176       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43990: use of closed network connection
	E1209 10:52:34.226233       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44000: use of closed network connection
	E1209 10:52:34.408728       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44016: use of closed network connection
	E1209 10:52:34.588897       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44034: use of closed network connection
	E1209 10:52:34.765608       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44050: use of closed network connection
	E1209 10:52:34.943122       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44058: use of closed network connection
	E1209 10:52:35.115793       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44068: use of closed network connection
	W1209 10:54:00.405476       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.69 192.168.39.82]
	
	
	==> kube-controller-manager [00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604] <==
	I1209 10:53:04.483677       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-792382-m04" podCIDRs=["10.244.3.0/24"]
	I1209 10:53:04.483873       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:04.484031       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:04.508782       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:04.947247       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:05.336150       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:05.632610       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-792382-m04"
	I1209 10:53:05.665145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:07.101579       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:07.148958       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:08.041907       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:08.474258       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:14.706287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:25.397617       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:25.397765       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-792382-m04"
	I1209 10:53:25.412410       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:25.649201       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:35.378859       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:54:20.671888       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m02"
	I1209 10:54:20.672434       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-792382-m04"
	I1209 10:54:20.703980       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m02"
	I1209 10:54:20.840624       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.419282ms"
	I1209 10:54:20.841721       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="157.508µs"
	I1209 10:54:22.157822       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m02"
	I1209 10:54:25.899451       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m02"
	
	
	==> kube-proxy [3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 10:49:58.601423       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 10:49:58.617859       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	E1209 10:49:58.617945       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 10:49:58.657152       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 10:49:58.657213       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 10:49:58.657247       1 server_linux.go:169] "Using iptables Proxier"
	I1209 10:49:58.660760       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 10:49:58.661154       1 server.go:483] "Version info" version="v1.31.2"
	I1209 10:49:58.661230       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 10:49:58.663604       1 config.go:199] "Starting service config controller"
	I1209 10:49:58.663767       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 10:49:58.664471       1 config.go:105] "Starting endpoint slice config controller"
	I1209 10:49:58.664498       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 10:49:58.666409       1 config.go:328] "Starting node config controller"
	I1209 10:49:58.666433       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 10:49:58.765096       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 10:49:58.767373       1 shared_informer.go:320] Caches are synced for service config
	I1209 10:49:58.767373       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee] <==
	W1209 10:49:49.686971       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 10:49:49.687036       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1209 10:49:49.693717       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1209 10:49:49.693755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:49.756854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 10:49:49.756907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:49.761365       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 10:49:49.761407       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:49.901909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 10:49:49.902484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:50.012571       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1209 10:49:50.012617       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:50.018069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1209 10:49:50.018128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:50.045681       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1209 10:49:50.045732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:50.048146       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 10:49:50.048203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1209 10:49:51.665195       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1209 10:52:27.353144       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ft8s2\": pod busybox-7dff88458-ft8s2 is already assigned to node \"ha-792382-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-ft8s2" node="ha-792382-m03"
	E1209 10:52:27.354035       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 51271b6c-9fb3-4893-8502-54b74c4cbaa5(default/busybox-7dff88458-ft8s2) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-ft8s2"
	E1209 10:52:27.354086       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ft8s2\": pod busybox-7dff88458-ft8s2 is already assigned to node \"ha-792382-m03\"" pod="default/busybox-7dff88458-ft8s2"
	I1209 10:52:27.354141       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-ft8s2" node="ha-792382-m03"
	E1209 10:52:27.402980       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-z9wjm\": pod busybox-7dff88458-z9wjm is already assigned to node \"ha-792382\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-z9wjm" node="ha-792382"
	E1209 10:52:27.403164       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-z9wjm\": pod busybox-7dff88458-z9wjm is already assigned to node \"ha-792382\"" pod="default/busybox-7dff88458-z9wjm"
	
	
	==> kubelet <==
	Dec 09 10:54:52 ha-792382 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 10:54:52 ha-792382 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 10:54:52 ha-792382 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 10:54:52 ha-792382 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 10:54:52 ha-792382 kubelet[1304]: E1209 10:54:52.082247    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741692081818749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:54:52 ha-792382 kubelet[1304]: E1209 10:54:52.082273    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741692081818749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:02 ha-792382 kubelet[1304]: E1209 10:55:02.088147    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741702086894201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:02 ha-792382 kubelet[1304]: E1209 10:55:02.088210    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741702086894201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:12 ha-792382 kubelet[1304]: E1209 10:55:12.089935    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741712089600382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:12 ha-792382 kubelet[1304]: E1209 10:55:12.090372    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741712089600382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:22 ha-792382 kubelet[1304]: E1209 10:55:22.094837    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741722094438540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:22 ha-792382 kubelet[1304]: E1209 10:55:22.094877    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741722094438540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:32 ha-792382 kubelet[1304]: E1209 10:55:32.096240    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741732095902907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:32 ha-792382 kubelet[1304]: E1209 10:55:32.096268    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741732095902907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:42 ha-792382 kubelet[1304]: E1209 10:55:42.098166    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741742097877429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:42 ha-792382 kubelet[1304]: E1209 10:55:42.098566    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741742097877429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:52 ha-792382 kubelet[1304]: E1209 10:55:52.004085    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 10:55:52 ha-792382 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 10:55:52 ha-792382 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 10:55:52 ha-792382 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 10:55:52 ha-792382 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 10:55:52 ha-792382 kubelet[1304]: E1209 10:55:52.100761    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741752100425512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:52 ha-792382 kubelet[1304]: E1209 10:55:52.100783    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741752100425512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:56:02 ha-792382 kubelet[1304]: E1209 10:56:02.102546    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741762102177289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:56:02 ha-792382 kubelet[1304]: E1209 10:56:02.102939    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741762102177289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-792382 -n ha-792382
helpers_test.go:261: (dbg) Run:  kubectl --context ha-792382 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.66s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.432511167s)
ha_test.go:415: expected profile "ha-792382" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-792382\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-792382\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-792382\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.69\",\"Port\":8443,\"Kuber
netesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.89\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.82\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.54\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\
"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-792382 -n ha-792382
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-792382 logs -n 25: (1.348818408s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3558467820/001/cp-test_ha-792382-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382:/home/docker/cp-test_ha-792382-m03_ha-792382.txt                       |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382 sudo cat                                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m03_ha-792382.txt                                 |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m02:/home/docker/cp-test_ha-792382-m03_ha-792382-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m02 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m03_ha-792382-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04:/home/docker/cp-test_ha-792382-m03_ha-792382-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m04 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m03_ha-792382-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp testdata/cp-test.txt                                                | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3558467820/001/cp-test_ha-792382-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382:/home/docker/cp-test_ha-792382-m04_ha-792382.txt                       |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382 sudo cat                                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382.txt                                 |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m02:/home/docker/cp-test_ha-792382-m04_ha-792382-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m02 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03:/home/docker/cp-test_ha-792382-m04_ha-792382-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m03 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-792382 node stop m02 -v=7                                                     | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 10:49:12
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 10:49:12.155112  627293 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:49:12.155243  627293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:49:12.155252  627293 out.go:358] Setting ErrFile to fd 2...
	I1209 10:49:12.155256  627293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:49:12.155455  627293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 10:49:12.156111  627293 out.go:352] Setting JSON to false
	I1209 10:49:12.157109  627293 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":12696,"bootTime":1733728656,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 10:49:12.157245  627293 start.go:139] virtualization: kvm guest
	I1209 10:49:12.159303  627293 out.go:177] * [ha-792382] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 10:49:12.160611  627293 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 10:49:12.160611  627293 notify.go:220] Checking for updates...
	I1209 10:49:12.163029  627293 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 10:49:12.164218  627293 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:49:12.165346  627293 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:12.166392  627293 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 10:49:12.168066  627293 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 10:49:12.169526  627293 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 10:49:12.205667  627293 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 10:49:12.206853  627293 start.go:297] selected driver: kvm2
	I1209 10:49:12.206869  627293 start.go:901] validating driver "kvm2" against <nil>
	I1209 10:49:12.206881  627293 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 10:49:12.207633  627293 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:49:12.207718  627293 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 10:49:12.223409  627293 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 10:49:12.223621  627293 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 10:49:12.224275  627293 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 10:49:12.224320  627293 cni.go:84] Creating CNI manager for ""
	I1209 10:49:12.224382  627293 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1209 10:49:12.224394  627293 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 10:49:12.224467  627293 start.go:340] cluster config:
	{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:49:12.224624  627293 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:49:12.226221  627293 out.go:177] * Starting "ha-792382" primary control-plane node in "ha-792382" cluster
	I1209 10:49:12.227308  627293 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:49:12.227336  627293 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 10:49:12.227354  627293 cache.go:56] Caching tarball of preloaded images
	I1209 10:49:12.227432  627293 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 10:49:12.227447  627293 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 10:49:12.227749  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:49:12.227772  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json: {Name:mkc1440c2022322fca4f71077ddb8bd509450a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:12.227928  627293 start.go:360] acquireMachinesLock for ha-792382: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 10:49:12.227972  627293 start.go:364] duration metric: took 26.731µs to acquireMachinesLock for "ha-792382"
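The acquireMachinesLock lines above show a poll-until-timeout lock (Delay:500ms, Timeout:13m0s) guarding machine creation. A minimal, hypothetical sketch of that pattern using flock(2) on Linux is shown below; minikube's real lock implementation differs, and the path and timings here are only illustrative.

package machinelock

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// acquireFileLock polls a non-blocking flock on lockPath every delay until
// timeout elapses, mirroring the Delay/Timeout fields logged above.
// The caller releases the lock by closing the returned file.
func acquireFileLock(lockPath string, delay, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
			return f, nil
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out after %v waiting for %s", timeout, lockPath)
		}
		time.Sleep(delay)
	}
}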
	I1209 10:49:12.227996  627293 start.go:93] Provisioning new machine with config: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:49:12.228057  627293 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 10:49:12.229507  627293 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 10:49:12.229650  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:12.229688  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:12.243739  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40935
	I1209 10:49:12.244181  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:12.244733  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:12.244754  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:12.245151  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:12.245359  627293 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:49:12.245524  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:12.245673  627293 start.go:159] libmachine.API.Create for "ha-792382" (driver="kvm2")
	I1209 10:49:12.245706  627293 client.go:168] LocalClient.Create starting
	I1209 10:49:12.245734  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 10:49:12.245764  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:49:12.245782  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:49:12.245831  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 10:49:12.245849  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:49:12.245860  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:49:12.245876  627293 main.go:141] libmachine: Running pre-create checks...
	I1209 10:49:12.245884  627293 main.go:141] libmachine: (ha-792382) Calling .PreCreateCheck
	I1209 10:49:12.246327  627293 main.go:141] libmachine: (ha-792382) Calling .GetConfigRaw
	I1209 10:49:12.246669  627293 main.go:141] libmachine: Creating machine...
	I1209 10:49:12.246682  627293 main.go:141] libmachine: (ha-792382) Calling .Create
	I1209 10:49:12.246831  627293 main.go:141] libmachine: (ha-792382) Creating KVM machine...
	I1209 10:49:12.248145  627293 main.go:141] libmachine: (ha-792382) DBG | found existing default KVM network
	I1209 10:49:12.248911  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.248755  627316 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123350}
	I1209 10:49:12.248939  627293 main.go:141] libmachine: (ha-792382) DBG | created network xml: 
	I1209 10:49:12.248951  627293 main.go:141] libmachine: (ha-792382) DBG | <network>
	I1209 10:49:12.248971  627293 main.go:141] libmachine: (ha-792382) DBG |   <name>mk-ha-792382</name>
	I1209 10:49:12.248981  627293 main.go:141] libmachine: (ha-792382) DBG |   <dns enable='no'/>
	I1209 10:49:12.248994  627293 main.go:141] libmachine: (ha-792382) DBG |   
	I1209 10:49:12.249009  627293 main.go:141] libmachine: (ha-792382) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1209 10:49:12.249019  627293 main.go:141] libmachine: (ha-792382) DBG |     <dhcp>
	I1209 10:49:12.249032  627293 main.go:141] libmachine: (ha-792382) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1209 10:49:12.249045  627293 main.go:141] libmachine: (ha-792382) DBG |     </dhcp>
	I1209 10:49:12.249058  627293 main.go:141] libmachine: (ha-792382) DBG |   </ip>
	I1209 10:49:12.249067  627293 main.go:141] libmachine: (ha-792382) DBG |   
	I1209 10:49:12.249134  627293 main.go:141] libmachine: (ha-792382) DBG | </network>
	I1209 10:49:12.249173  627293 main.go:141] libmachine: (ha-792382) DBG | 
	I1209 10:49:12.253952  627293 main.go:141] libmachine: (ha-792382) DBG | trying to create private KVM network mk-ha-792382 192.168.39.0/24...
	I1209 10:49:12.320765  627293 main.go:141] libmachine: (ha-792382) DBG | private KVM network mk-ha-792382 192.168.39.0/24 created
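The driver logs the libvirt network XML it generated before defining the private mk-ha-792382 network above. A rough sketch of producing such a document from a few parameters with text/template follows; the names and CIDR come from the log, while the package, type and function names are illustrative rather than the driver's actual code.

package netxml

import (
	"bytes"
	"text/template"
)

// networkTmpl mirrors the <network> document shown in the log: a named
// network with DHCP over a /24 and DNS disabled.
const networkTmpl = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='255.255.255.0'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>`

type Params struct {
	Name, Gateway, ClientMin, ClientMax string
}

// Render returns the network definition for, e.g.,
// Params{"mk-ha-792382", "192.168.39.1", "192.168.39.2", "192.168.39.253"}.
func Render(p Params) (string, error) {
	var buf bytes.Buffer
	if err := template.Must(template.New("net").Parse(networkTmpl)).Execute(&buf, p); err != nil {
		return "", err
	}
	return buf.String(), nil
}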
	I1209 10:49:12.320810  627293 main.go:141] libmachine: (ha-792382) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382 ...
	I1209 10:49:12.320824  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.320703  627316 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:12.320846  627293 main.go:141] libmachine: (ha-792382) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 10:49:12.320864  627293 main.go:141] libmachine: (ha-792382) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 10:49:12.624365  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.624217  627316 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa...
	I1209 10:49:12.718158  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.718015  627316 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/ha-792382.rawdisk...
	I1209 10:49:12.718234  627293 main.go:141] libmachine: (ha-792382) DBG | Writing magic tar header
	I1209 10:49:12.718307  627293 main.go:141] libmachine: (ha-792382) DBG | Writing SSH key tar header
	I1209 10:49:12.718345  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.718134  627316 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382 ...
	I1209 10:49:12.718360  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382 (perms=drwx------)
	I1209 10:49:12.718367  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382
	I1209 10:49:12.718384  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 10:49:12.718399  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:12.718409  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 10:49:12.718416  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 10:49:12.718424  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 10:49:12.718431  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 10:49:12.718436  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins
	I1209 10:49:12.718443  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home
	I1209 10:49:12.718449  627293 main.go:141] libmachine: (ha-792382) DBG | Skipping /home - not owner
	I1209 10:49:12.718461  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 10:49:12.718475  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 10:49:12.718495  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 10:49:12.718506  627293 main.go:141] libmachine: (ha-792382) Creating domain...
	I1209 10:49:12.719443  627293 main.go:141] libmachine: (ha-792382) define libvirt domain using xml: 
	I1209 10:49:12.719473  627293 main.go:141] libmachine: (ha-792382) <domain type='kvm'>
	I1209 10:49:12.719482  627293 main.go:141] libmachine: (ha-792382)   <name>ha-792382</name>
	I1209 10:49:12.719490  627293 main.go:141] libmachine: (ha-792382)   <memory unit='MiB'>2200</memory>
	I1209 10:49:12.719512  627293 main.go:141] libmachine: (ha-792382)   <vcpu>2</vcpu>
	I1209 10:49:12.719521  627293 main.go:141] libmachine: (ha-792382)   <features>
	I1209 10:49:12.719529  627293 main.go:141] libmachine: (ha-792382)     <acpi/>
	I1209 10:49:12.719537  627293 main.go:141] libmachine: (ha-792382)     <apic/>
	I1209 10:49:12.719561  627293 main.go:141] libmachine: (ha-792382)     <pae/>
	I1209 10:49:12.719580  627293 main.go:141] libmachine: (ha-792382)     
	I1209 10:49:12.719586  627293 main.go:141] libmachine: (ha-792382)   </features>
	I1209 10:49:12.719602  627293 main.go:141] libmachine: (ha-792382)   <cpu mode='host-passthrough'>
	I1209 10:49:12.719613  627293 main.go:141] libmachine: (ha-792382)   
	I1209 10:49:12.719619  627293 main.go:141] libmachine: (ha-792382)   </cpu>
	I1209 10:49:12.719631  627293 main.go:141] libmachine: (ha-792382)   <os>
	I1209 10:49:12.719637  627293 main.go:141] libmachine: (ha-792382)     <type>hvm</type>
	I1209 10:49:12.719648  627293 main.go:141] libmachine: (ha-792382)     <boot dev='cdrom'/>
	I1209 10:49:12.719659  627293 main.go:141] libmachine: (ha-792382)     <boot dev='hd'/>
	I1209 10:49:12.719681  627293 main.go:141] libmachine: (ha-792382)     <bootmenu enable='no'/>
	I1209 10:49:12.719701  627293 main.go:141] libmachine: (ha-792382)   </os>
	I1209 10:49:12.719719  627293 main.go:141] libmachine: (ha-792382)   <devices>
	I1209 10:49:12.719738  627293 main.go:141] libmachine: (ha-792382)     <disk type='file' device='cdrom'>
	I1209 10:49:12.719756  627293 main.go:141] libmachine: (ha-792382)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/boot2docker.iso'/>
	I1209 10:49:12.719767  627293 main.go:141] libmachine: (ha-792382)       <target dev='hdc' bus='scsi'/>
	I1209 10:49:12.719777  627293 main.go:141] libmachine: (ha-792382)       <readonly/>
	I1209 10:49:12.719791  627293 main.go:141] libmachine: (ha-792382)     </disk>
	I1209 10:49:12.719805  627293 main.go:141] libmachine: (ha-792382)     <disk type='file' device='disk'>
	I1209 10:49:12.719816  627293 main.go:141] libmachine: (ha-792382)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 10:49:12.719831  627293 main.go:141] libmachine: (ha-792382)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/ha-792382.rawdisk'/>
	I1209 10:49:12.719845  627293 main.go:141] libmachine: (ha-792382)       <target dev='hda' bus='virtio'/>
	I1209 10:49:12.719857  627293 main.go:141] libmachine: (ha-792382)     </disk>
	I1209 10:49:12.719868  627293 main.go:141] libmachine: (ha-792382)     <interface type='network'>
	I1209 10:49:12.719881  627293 main.go:141] libmachine: (ha-792382)       <source network='mk-ha-792382'/>
	I1209 10:49:12.719892  627293 main.go:141] libmachine: (ha-792382)       <model type='virtio'/>
	I1209 10:49:12.719902  627293 main.go:141] libmachine: (ha-792382)     </interface>
	I1209 10:49:12.719910  627293 main.go:141] libmachine: (ha-792382)     <interface type='network'>
	I1209 10:49:12.719940  627293 main.go:141] libmachine: (ha-792382)       <source network='default'/>
	I1209 10:49:12.719966  627293 main.go:141] libmachine: (ha-792382)       <model type='virtio'/>
	I1209 10:49:12.719981  627293 main.go:141] libmachine: (ha-792382)     </interface>
	I1209 10:49:12.719994  627293 main.go:141] libmachine: (ha-792382)     <serial type='pty'>
	I1209 10:49:12.720009  627293 main.go:141] libmachine: (ha-792382)       <target port='0'/>
	I1209 10:49:12.720026  627293 main.go:141] libmachine: (ha-792382)     </serial>
	I1209 10:49:12.720038  627293 main.go:141] libmachine: (ha-792382)     <console type='pty'>
	I1209 10:49:12.720049  627293 main.go:141] libmachine: (ha-792382)       <target type='serial' port='0'/>
	I1209 10:49:12.720070  627293 main.go:141] libmachine: (ha-792382)     </console>
	I1209 10:49:12.720083  627293 main.go:141] libmachine: (ha-792382)     <rng model='virtio'>
	I1209 10:49:12.720106  627293 main.go:141] libmachine: (ha-792382)       <backend model='random'>/dev/random</backend>
	I1209 10:49:12.720122  627293 main.go:141] libmachine: (ha-792382)     </rng>
	I1209 10:49:12.720133  627293 main.go:141] libmachine: (ha-792382)     
	I1209 10:49:12.720141  627293 main.go:141] libmachine: (ha-792382)     
	I1209 10:49:12.720152  627293 main.go:141] libmachine: (ha-792382)   </devices>
	I1209 10:49:12.720161  627293 main.go:141] libmachine: (ha-792382) </domain>
	I1209 10:49:12.720175  627293 main.go:141] libmachine: (ha-792382) 
	I1209 10:49:12.724156  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:b1:77:e1 in network default
	I1209 10:49:12.724674  627293 main.go:141] libmachine: (ha-792382) Ensuring networks are active...
	I1209 10:49:12.724713  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:12.725331  627293 main.go:141] libmachine: (ha-792382) Ensuring network default is active
	I1209 10:49:12.725573  627293 main.go:141] libmachine: (ha-792382) Ensuring network mk-ha-792382 is active
	I1209 10:49:12.726011  627293 main.go:141] libmachine: (ha-792382) Getting domain xml...
	I1209 10:49:12.726856  627293 main.go:141] libmachine: (ha-792382) Creating domain...
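After rendering the domain XML above, the driver registers and boots the guest through libvirt. An equivalent command-line sketch using virsh via os/exec is shown below; the kvm2 driver talks to libvirt directly rather than shelling out, so this is only a rough stand-in, with placeholder names.

package vmboot

import (
	"fmt"
	"os"
	"os/exec"
)

// DefineAndStart writes domainXML to a temp file, registers it with libvirt
// and boots the guest, roughly the "Creating domain..." step logged above.
func DefineAndStart(name, domainXML string) error {
	f, err := os.CreateTemp("", name+"-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(domainXML); err != nil {
		return err
	}
	f.Close()
	for _, args := range [][]string{
		{"virsh", "--connect", "qemu:///system", "define", f.Name()},
		{"virsh", "--connect", "qemu:///system", "start", name},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}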
	I1209 10:49:13.913426  627293 main.go:141] libmachine: (ha-792382) Waiting to get IP...
	I1209 10:49:13.914474  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:13.914854  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:13.914884  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:13.914843  627316 retry.go:31] will retry after 231.46558ms: waiting for machine to come up
	I1209 10:49:14.148392  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:14.148786  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:14.148818  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:14.148733  627316 retry.go:31] will retry after 323.334507ms: waiting for machine to come up
	I1209 10:49:14.473105  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:14.473482  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:14.473521  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:14.473432  627316 retry.go:31] will retry after 293.410473ms: waiting for machine to come up
	I1209 10:49:14.769073  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:14.769413  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:14.769442  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:14.769369  627316 retry.go:31] will retry after 414.561658ms: waiting for machine to come up
	I1209 10:49:15.186115  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:15.186526  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:15.186550  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:15.186486  627316 retry.go:31] will retry after 602.170929ms: waiting for machine to come up
	I1209 10:49:15.790232  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:15.790609  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:15.790636  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:15.790561  627316 retry.go:31] will retry after 626.828073ms: waiting for machine to come up
	I1209 10:49:16.419433  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:16.419896  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:16.419938  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:16.419857  627316 retry.go:31] will retry after 735.370165ms: waiting for machine to come up
	I1209 10:49:17.156849  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:17.157231  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:17.157266  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:17.157218  627316 retry.go:31] will retry after 1.229419392s: waiting for machine to come up
	I1209 10:49:18.387855  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:18.388261  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:18.388300  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:18.388201  627316 retry.go:31] will retry after 1.781823768s: waiting for machine to come up
	I1209 10:49:20.172140  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:20.172552  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:20.172583  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:20.172526  627316 retry.go:31] will retry after 1.563022016s: waiting for machine to come up
	I1209 10:49:21.736731  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:21.737192  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:21.737227  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:21.737132  627316 retry.go:31] will retry after 1.796183688s: waiting for machine to come up
	I1209 10:49:23.536165  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:23.536600  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:23.536633  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:23.536553  627316 retry.go:31] will retry after 2.766987907s: waiting for machine to come up
	I1209 10:49:26.306562  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:26.306896  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:26.306918  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:26.306878  627316 retry.go:31] will retry after 3.713874413s: waiting for machine to come up
	I1209 10:49:30.024188  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:30.024650  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:30.024693  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:30.024632  627316 retry.go:31] will retry after 4.575233995s: waiting for machine to come up
	I1209 10:49:34.603079  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.603556  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has current primary IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.603577  627293 main.go:141] libmachine: (ha-792382) Found IP for machine: 192.168.39.69
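The "will retry after ..." lines above are a jittered, growing backoff wrapped around a DHCP-lease lookup until the guest reports an address. A minimal sketch of that wait loop follows; the lookup function is left abstract and the intervals are illustrative, not the driver's exact schedule.

package ipwait

import (
	"fmt"
	"log"
	"math/rand"
	"time"
)

// WaitForIP polls lookup until it returns an address or timeout elapses,
// sleeping a jittered, doubling interval between attempts, much like the
// retry.go lines in the log.
func WaitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
		log.Printf("will retry after %v: waiting for machine to come up", sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("no IP after %v", timeout)
}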
	I1209 10:49:34.603593  627293 main.go:141] libmachine: (ha-792382) Reserving static IP address...
	I1209 10:49:34.603995  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find host DHCP lease matching {name: "ha-792382", mac: "52:54:00:a8:82:f7", ip: "192.168.39.69"} in network mk-ha-792382
	I1209 10:49:34.677115  627293 main.go:141] libmachine: (ha-792382) DBG | Getting to WaitForSSH function...
	I1209 10:49:34.677150  627293 main.go:141] libmachine: (ha-792382) Reserved static IP address: 192.168.39.69
	I1209 10:49:34.677164  627293 main.go:141] libmachine: (ha-792382) Waiting for SSH to be available...
	I1209 10:49:34.680016  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.680510  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:34.680547  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.680683  627293 main.go:141] libmachine: (ha-792382) DBG | Using SSH client type: external
	I1209 10:49:34.680713  627293 main.go:141] libmachine: (ha-792382) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa (-rw-------)
	I1209 10:49:34.680743  627293 main.go:141] libmachine: (ha-792382) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 10:49:34.680759  627293 main.go:141] libmachine: (ha-792382) DBG | About to run SSH command:
	I1209 10:49:34.680771  627293 main.go:141] libmachine: (ha-792382) DBG | exit 0
	I1209 10:49:34.802056  627293 main.go:141] libmachine: (ha-792382) DBG | SSH cmd err, output: <nil>: 
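WaitForSSH above shells out to the system ssh client with host-key checking disabled and runs `exit 0` until the guest's sshd accepts the connection. A stripped-down sketch of a single probe is shown below; the flags are a subset of those logged, and the user and key path are placeholders supplied by the caller.

package sshwait

import "os/exec"

// Reachable runs `ssh ... exit 0` once and reports whether the connection
// succeeded; callers retry it in a loop like the one above.
func Reachable(user, ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		user+"@"+ip,
		"exit", "0")
	return cmd.Run() == nil
}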
	I1209 10:49:34.802342  627293 main.go:141] libmachine: (ha-792382) KVM machine creation complete!
	I1209 10:49:34.802652  627293 main.go:141] libmachine: (ha-792382) Calling .GetConfigRaw
	I1209 10:49:34.803265  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:34.803470  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:34.803641  627293 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 10:49:34.803655  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:49:34.804897  627293 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 10:49:34.804910  627293 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 10:49:34.804920  627293 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 10:49:34.804925  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:34.807181  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.807580  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:34.807606  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.807797  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:34.807971  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:34.808252  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:34.808380  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:34.808550  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:34.808901  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:34.808916  627293 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 10:49:34.901048  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:49:34.901075  627293 main.go:141] libmachine: Detecting the provisioner...
	I1209 10:49:34.901084  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:34.903801  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.904137  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:34.904167  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.904294  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:34.904473  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:34.904619  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:34.904801  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:34.904935  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:34.905144  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:34.905156  627293 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 10:49:34.998134  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 10:49:34.998232  627293 main.go:141] libmachine: found compatible host: buildroot
	I1209 10:49:34.998245  627293 main.go:141] libmachine: Provisioning with buildroot...
	I1209 10:49:34.998256  627293 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:49:34.998517  627293 buildroot.go:166] provisioning hostname "ha-792382"
	I1209 10:49:34.998550  627293 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:49:34.998742  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.001204  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.001556  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.001585  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.001746  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.001925  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.002086  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.002233  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.002387  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:35.002580  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:35.002594  627293 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792382 && echo "ha-792382" | sudo tee /etc/hostname
	I1209 10:49:35.111878  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792382
	
	I1209 10:49:35.111914  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.114679  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.114968  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.114999  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.115174  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.115415  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.115601  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.115731  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.115880  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:35.116106  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:35.116130  627293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792382' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792382/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792382' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 10:49:35.218632  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:49:35.218667  627293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 10:49:35.218688  627293 buildroot.go:174] setting up certificates
	I1209 10:49:35.218699  627293 provision.go:84] configureAuth start
	I1209 10:49:35.218708  627293 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:49:35.218985  627293 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:49:35.221513  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.221813  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.221835  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.221978  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.224283  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.224638  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.224666  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.224816  627293 provision.go:143] copyHostCerts
	I1209 10:49:35.224849  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:49:35.224892  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 10:49:35.224913  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:49:35.225004  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 10:49:35.225113  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:49:35.225145  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 10:49:35.225155  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:49:35.225195  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 10:49:35.225255  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:49:35.225280  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 10:49:35.225290  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:49:35.225325  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 10:49:35.225392  627293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.ha-792382 san=[127.0.0.1 192.168.39.69 ha-792382 localhost minikube]
	I1209 10:49:35.530739  627293 provision.go:177] copyRemoteCerts
	I1209 10:49:35.530807  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 10:49:35.530832  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.533806  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.534127  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.534158  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.534311  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.534552  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.534707  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.534862  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:35.611999  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 10:49:35.612097  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 10:49:35.633738  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 10:49:35.633820  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1209 10:49:35.654744  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 10:49:35.654813  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 10:49:35.675689  627293 provision.go:87] duration metric: took 456.977679ms to configureAuth
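configureAuth above generates a server certificate whose SANs include the loopback address, the machine IP and the node name (127.0.0.1 192.168.39.69 ha-792382 localhost minikube), then copies the PEM files into the guest. A compact sketch of issuing such a certificate with the standard library, signed by an existing CA, follows; loading the CA, writing the PEM files and the validity and key size shown are assumptions for illustration.

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// NewServerCert issues a TLS server certificate for the given DNS names and
// IPs, signed by caCert/caKey, similar to the "generating server cert" step.
// Assumes dnsNames has at least one entry, used as the CommonName.
func NewServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
	dnsNames []string, ips []net.IP) (certDER []byte, key *rsa.PrivateKey, err error) {
	key, err = rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: dnsNames[0]},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}
	certDER, err = x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return certDER, key, err
}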
	I1209 10:49:35.675718  627293 buildroot.go:189] setting minikube options for container-runtime
	I1209 10:49:35.675925  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:49:35.676032  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.678943  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.679261  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.679289  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.679496  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.679710  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.679841  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.679959  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.680105  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:35.680332  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:35.680355  627293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 10:49:35.879810  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 10:49:35.879848  627293 main.go:141] libmachine: Checking connection to Docker...
	I1209 10:49:35.879878  627293 main.go:141] libmachine: (ha-792382) Calling .GetURL
	I1209 10:49:35.881298  627293 main.go:141] libmachine: (ha-792382) DBG | Using libvirt version 6000000
	I1209 10:49:35.883322  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.883653  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.883694  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.883840  627293 main.go:141] libmachine: Docker is up and running!
	I1209 10:49:35.883855  627293 main.go:141] libmachine: Reticulating splines...
	I1209 10:49:35.883863  627293 client.go:171] duration metric: took 23.63814664s to LocalClient.Create
	I1209 10:49:35.883888  627293 start.go:167] duration metric: took 23.638217304s to libmachine.API.Create "ha-792382"
	I1209 10:49:35.883903  627293 start.go:293] postStartSetup for "ha-792382" (driver="kvm2")
	I1209 10:49:35.883916  627293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 10:49:35.883934  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:35.884193  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 10:49:35.884224  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.886333  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.886719  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.886746  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.886830  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.887023  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.887177  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.887342  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:35.963840  627293 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 10:49:35.967678  627293 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 10:49:35.967709  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 10:49:35.967791  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 10:49:35.967866  627293 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 10:49:35.967876  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /etc/ssl/certs/6170172.pem
	I1209 10:49:35.967969  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 10:49:35.976432  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:49:35.997593  627293 start.go:296] duration metric: took 113.67336ms for postStartSetup
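postStartSetup above scans .minikube/files for local assets and mirrors each file into the guest at the matching path (here 6170172.pem into /etc/ssl/certs). A small sketch of that scan is shown below; the package and function names are hypothetical, and the actual copy over SSH is left to the caller.

package filesync

import (
	"io/fs"
	"path/filepath"
)

// ScanAssets walks root (e.g. ~/.minikube/files) and returns a map from each
// local file to its destination inside the guest, preserving the layout, so
// root/etc/ssl/certs/6170172.pem maps to /etc/ssl/certs/6170172.pem.
func ScanAssets(root string) (map[string]string, error) {
	assets := map[string]string{}
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		assets[path] = "/" + filepath.ToSlash(rel)
		return nil
	})
	return assets, err
}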
	I1209 10:49:35.997658  627293 main.go:141] libmachine: (ha-792382) Calling .GetConfigRaw
	I1209 10:49:35.998325  627293 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:49:36.000848  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.001239  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.001267  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.001479  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:49:36.001656  627293 start.go:128] duration metric: took 23.77358998s to createHost
	I1209 10:49:36.001690  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:36.004043  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.004400  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.004431  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.004549  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:36.004734  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:36.004893  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:36.005024  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:36.005202  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:36.005368  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:36.005389  627293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 10:49:36.102487  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733741376.078541083
	
	I1209 10:49:36.102513  627293 fix.go:216] guest clock: 1733741376.078541083
	I1209 10:49:36.102520  627293 fix.go:229] Guest: 2024-12-09 10:49:36.078541083 +0000 UTC Remote: 2024-12-09 10:49:36.001674575 +0000 UTC m=+23.885913523 (delta=76.866508ms)
	I1209 10:49:36.102562  627293 fix.go:200] guest clock delta is within tolerance: 76.866508ms
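Editorial note on the clock check recorded above: minikube reads the guest's clock with "date +%s.%N" and compares it against the host's wall clock, accepting a small drift (here about 77ms). The fragment below is a hypothetical, minimal re-creation of that comparison in Go; the sample timestamp is the one printed in the log, while the 2-second tolerance and everything else is an illustrative assumption, not minikube's actual code.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns the output of `date +%s.%N` (e.g. "1733741376.078541083")
// into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad the fractional part to exactly 9 digits of nanoseconds
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1733741376.078541083") // sample value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}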
	I1209 10:49:36.102567  627293 start.go:83] releasing machines lock for "ha-792382", held for 23.874584082s
	I1209 10:49:36.102599  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:36.102894  627293 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:49:36.105447  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.105786  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.105824  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.105948  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:36.106428  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:36.106564  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:36.106659  627293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 10:49:36.106712  627293 ssh_runner.go:195] Run: cat /version.json
	I1209 10:49:36.106729  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:36.106735  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:36.108936  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.108975  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.109292  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.109315  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.109331  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.109347  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.109458  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:36.109631  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:36.109648  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:36.109795  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:36.109838  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:36.109969  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:36.109997  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:36.110076  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:36.213912  627293 ssh_runner.go:195] Run: systemctl --version
	I1209 10:49:36.219737  627293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 10:49:36.373775  627293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 10:49:36.379232  627293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 10:49:36.379295  627293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 10:49:36.394395  627293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
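Editorial note: the find/mv command above sidelines any pre-existing bridge or podman CNI configs by renaming them with a ".mk_disabled" suffix so they cannot conflict with the CNI that minikube installs next. A rough, hypothetical equivalent of that rename pass in Go (run as root against /etc/cni/net.d) might look like the sketch below; the directory path comes from the log, everything else is illustrative and not the actual implementation.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d" // path taken from the log above
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			panic(err)
		}
		for _, m := range matches {
			// leave already-disabled files alone
			if strings.HasSuffix(m, ".mk_disabled") {
				continue
			}
			disabled := m + ".mk_disabled"
			if err := os.Rename(m, disabled); err != nil {
				fmt.Fprintf(os.Stderr, "could not disable %s: %v\n", m, err)
				continue
			}
			fmt.Printf("disabled %s\n", disabled)
		}
	}
}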
	I1209 10:49:36.394420  627293 start.go:495] detecting cgroup driver to use...
	I1209 10:49:36.394492  627293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 10:49:36.409701  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 10:49:36.422542  627293 docker.go:217] disabling cri-docker service (if available) ...
	I1209 10:49:36.422600  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 10:49:36.434811  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 10:49:36.447372  627293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 10:49:36.555614  627293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 10:49:36.712890  627293 docker.go:233] disabling docker service ...
	I1209 10:49:36.712971  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 10:49:36.726789  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 10:49:36.738514  627293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 10:49:36.860478  627293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 10:49:36.981442  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 10:49:36.994232  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 10:49:37.010639  627293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 10:49:37.010699  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.019623  627293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 10:49:37.019678  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.028741  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.037802  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.047112  627293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 10:49:37.056587  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.065626  627293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.081471  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.090400  627293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 10:49:37.098511  627293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 10:49:37.098567  627293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 10:49:37.112020  627293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 10:49:37.122574  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:49:37.244301  627293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 10:49:37.327990  627293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 10:49:37.328076  627293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 10:49:37.332519  627293 start.go:563] Will wait 60s for crictl version
	I1209 10:49:37.332580  627293 ssh_runner.go:195] Run: which crictl
	I1209 10:49:37.336027  627293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 10:49:37.371600  627293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 10:49:37.371689  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:49:37.397060  627293 ssh_runner.go:195] Run: crio --version
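Editorial note: the block above points crictl at the CRI-O socket via /etc/crictl.yaml, rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroupfs cgroup manager, conmon_cgroup), and then restarts crio before verifying the runtime version. A hypothetical local sketch of that edit sequence, shelling out much like ssh_runner does, is shown below; it assumes it is run as root on a host that already has an 02-crio.conf, and it is not minikube's actual code.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// commands mirror the crictl.yaml write and sed edits recorded in the log above
	cmds := []string{
		`printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' > /etc/crictl.yaml`,
		`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`systemctl restart crio`,
	}
	for _, c := range cmds {
		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
			log.Fatalf("%q failed: %v\n%s", c, err, out)
		}
	}
}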
	I1209 10:49:37.427301  627293 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 10:49:37.428631  627293 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:49:37.431338  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:37.431646  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:37.431664  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:37.431871  627293 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 10:49:37.435530  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
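Editorial note: the one-liner above is an idempotent /etc/hosts update. It strips any existing host.minikube.internal line with grep -v, appends the fresh mapping, writes the result to a temp file, and copies it back with sudo (a plain shell redirect would not run under sudo). Below is a small, self-contained Go sketch of the same rewrite; the hostname and IP are taken from the log, and writing to a scratch copy instead of the real /etc/hosts is an assumption made to keep the example safe to run.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const (
		hostsPath = "/tmp/hosts.example" // stand-in for /etc/hosts in this sketch
		entry     = "192.168.39.1\thost.minikube.internal"
	)
	data, err := os.ReadFile(hostsPath)
	if err != nil && !os.IsNotExist(err) {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// drop any stale mapping for the same hostname, skip empty lines
		if strings.HasSuffix(line, "\thost.minikube.internal") || line == "" {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile(hostsPath, []byte(out), 0o644); err != nil {
		panic(err)
	}
	fmt.Printf("wrote %d lines to %s\n", len(kept), hostsPath)
}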
	I1209 10:49:37.447078  627293 kubeadm.go:883] updating cluster {Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 10:49:37.447263  627293 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:49:37.447334  627293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 10:49:37.477408  627293 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 10:49:37.477478  627293 ssh_runner.go:195] Run: which lz4
	I1209 10:49:37.480957  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1209 10:49:37.481050  627293 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 10:49:37.484762  627293 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 10:49:37.484788  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 10:49:38.710605  627293 crio.go:462] duration metric: took 1.229579062s to copy over tarball
	I1209 10:49:38.710680  627293 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 10:49:40.690695  627293 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.979974769s)
	I1209 10:49:40.690734  627293 crio.go:469] duration metric: took 1.980097705s to extract the tarball
	I1209 10:49:40.690745  627293 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 10:49:40.726929  627293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 10:49:40.771095  627293 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 10:49:40.771125  627293 cache_images.go:84] Images are preloaded, skipping loading
	I1209 10:49:40.771136  627293 kubeadm.go:934] updating node { 192.168.39.69 8443 v1.31.2 crio true true} ...
	I1209 10:49:40.771264  627293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792382 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 10:49:40.771357  627293 ssh_runner.go:195] Run: crio config
	I1209 10:49:40.816747  627293 cni.go:84] Creating CNI manager for ""
	I1209 10:49:40.816772  627293 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 10:49:40.816783  627293 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 10:49:40.816808  627293 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-792382 NodeName:ha-792382 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 10:49:40.816935  627293 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-792382"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.69"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.69"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 10:49:40.816960  627293 kube-vip.go:115] generating kube-vip config ...
	I1209 10:49:40.817003  627293 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 10:49:40.831794  627293 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 10:49:40.831917  627293 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1209 10:49:40.831988  627293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 10:49:40.841266  627293 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 10:49:40.841344  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1209 10:49:40.850351  627293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1209 10:49:40.865301  627293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 10:49:40.880173  627293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1209 10:49:40.895089  627293 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1209 10:49:40.909836  627293 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 10:49:40.913336  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:49:40.924356  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:49:41.046665  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:49:41.063018  627293 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382 for IP: 192.168.39.69
	I1209 10:49:41.063041  627293 certs.go:194] generating shared ca certs ...
	I1209 10:49:41.063062  627293 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.063244  627293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 10:49:41.063289  627293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 10:49:41.063300  627293 certs.go:256] generating profile certs ...
	I1209 10:49:41.063355  627293 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key
	I1209 10:49:41.063367  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt with IP's: []
	I1209 10:49:41.129843  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt ...
	I1209 10:49:41.129870  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt: {Name:mkf984c9e526db9b810af9b168d6930601d7ed72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.130077  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key ...
	I1209 10:49:41.130094  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key: {Name:mk7ce7334711bfa08abe5164a05b3a0e352b8f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.130213  627293 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.27c0a765
	I1209 10:49:41.130234  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.27c0a765 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.254]
	I1209 10:49:41.505985  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.27c0a765 ...
	I1209 10:49:41.506019  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.27c0a765: {Name:mkd0b0619960f58505ea5c5b1f53c5a2d8b55baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.506242  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.27c0a765 ...
	I1209 10:49:41.506261  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.27c0a765: {Name:mk67bc39f2b151954187d9bdff2b01a7060c0444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.506368  627293 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.27c0a765 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt
	I1209 10:49:41.506445  627293 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.27c0a765 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key
	I1209 10:49:41.506499  627293 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key
	I1209 10:49:41.506513  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt with IP's: []
	I1209 10:49:41.582775  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt ...
	I1209 10:49:41.582805  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt: {Name:mk8ba382df4a8d41cbb5595274fb67800a146923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.582997  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key ...
	I1209 10:49:41.583012  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key: {Name:mka4002ccf01f2f736e4a0e998ece96628af1083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.583117  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 10:49:41.583147  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 10:49:41.583161  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 10:49:41.583173  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 10:49:41.583197  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 10:49:41.583210  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 10:49:41.583222  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 10:49:41.583234  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 10:49:41.583286  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 10:49:41.583322  627293 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 10:49:41.583332  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 10:49:41.583354  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 10:49:41.583377  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 10:49:41.583404  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 10:49:41.583441  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:49:41.583468  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /usr/share/ca-certificates/6170172.pem
	I1209 10:49:41.583481  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:49:41.583493  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem -> /usr/share/ca-certificates/617017.pem
	I1209 10:49:41.584023  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 10:49:41.607858  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 10:49:41.629298  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 10:49:41.650915  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 10:49:41.672892  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 10:49:41.695834  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 10:49:41.719653  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 10:49:41.742298  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 10:49:41.764468  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 10:49:41.786947  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 10:49:41.811703  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 10:49:41.837346  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 10:49:41.855854  627293 ssh_runner.go:195] Run: openssl version
	I1209 10:49:41.862371  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 10:49:41.872771  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 10:49:41.878140  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 10:49:41.878210  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 10:49:41.883640  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 10:49:41.893209  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 10:49:41.902869  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 10:49:41.906850  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 10:49:41.906898  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 10:49:41.912084  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 10:49:41.922405  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 10:49:41.932252  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:49:41.936213  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:49:41.936274  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:49:41.941486  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
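Editorial note: the openssl/ln -fs steps above follow the standard OpenSSL trust-store layout. Each CA PEM placed under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (openssl x509 -hash -noout) with a ".0" suffix, which is why minikubeCA.pem ends up as b5213941.0 and 617017.pem as 51391683.0. A hedged sketch of that hash-and-link step in Go, shelling out to openssl as the log does, might look like the code below; the target directory and certificate path are placeholders, since the real operation touches /etc/ssl/certs and requires root.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the <hash>.0 symlink that OpenSSL expects for a CA cert.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// replace any existing link, mirroring `ln -fs`
	_ = os.Remove(link)
	if err := os.Symlink(certPath, link); err != nil {
		return "", err
	}
	return link, nil
}

func main() {
	// placeholder paths; the log links PEMs from /usr/share/ca-certificates into /etc/ssl/certs
	link, err := linkBySubjectHash("/tmp/minikubeCA.pem", "/tmp/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created", link)
}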
	I1209 10:49:41.951188  627293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 10:49:41.954834  627293 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 10:49:41.954890  627293 kubeadm.go:392] StartCluster: {Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:49:41.954978  627293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 10:49:41.955029  627293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 10:49:41.990596  627293 cri.go:89] found id: ""
	I1209 10:49:41.990674  627293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 10:49:41.999783  627293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 10:49:42.008238  627293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 10:49:42.016846  627293 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 10:49:42.016865  627293 kubeadm.go:157] found existing configuration files:
	
	I1209 10:49:42.016904  627293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 10:49:42.024739  627293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 10:49:42.024809  627293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 10:49:42.033044  627293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 10:49:42.040972  627293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 10:49:42.041020  627293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 10:49:42.049238  627293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 10:49:42.056966  627293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 10:49:42.057032  627293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 10:49:42.065232  627293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 10:49:42.073082  627293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 10:49:42.073123  627293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 10:49:42.081145  627293 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 10:49:42.179849  627293 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 10:49:42.179910  627293 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 10:49:42.276408  627293 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 10:49:42.276561  627293 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 10:49:42.276716  627293 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 10:49:42.284852  627293 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 10:49:42.286435  627293 out.go:235]   - Generating certificates and keys ...
	I1209 10:49:42.286522  627293 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 10:49:42.286594  627293 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 10:49:42.590387  627293 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 10:49:42.745055  627293 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 10:49:42.887467  627293 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 10:49:43.151549  627293 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 10:49:43.207644  627293 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 10:49:43.207798  627293 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-792382 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	I1209 10:49:43.393565  627293 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 10:49:43.393710  627293 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-792382 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	I1209 10:49:43.595429  627293 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 10:49:43.672644  627293 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 10:49:43.819815  627293 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 10:49:43.819914  627293 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 10:49:44.041243  627293 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 10:49:44.173892  627293 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 10:49:44.337644  627293 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 10:49:44.481944  627293 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 10:49:44.539526  627293 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 10:49:44.540094  627293 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 10:49:44.543689  627293 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 10:49:44.575870  627293 out.go:235]   - Booting up control plane ...
	I1209 10:49:44.576053  627293 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 10:49:44.576187  627293 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 10:49:44.576309  627293 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 10:49:44.576459  627293 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 10:49:44.576560  627293 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 10:49:44.576606  627293 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 10:49:44.708364  627293 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 10:49:44.708561  627293 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 10:49:45.209677  627293 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.518639ms
	I1209 10:49:45.209811  627293 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 10:49:51.244834  627293 kubeadm.go:310] [api-check] The API server is healthy after 6.038769474s
	I1209 10:49:51.258766  627293 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 10:49:51.275586  627293 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 10:49:51.347505  627293 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 10:49:51.347730  627293 kubeadm.go:310] [mark-control-plane] Marking the node ha-792382 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 10:49:51.363557  627293 kubeadm.go:310] [bootstrap-token] Using token: 3fogiz.oanziwjzsm1wr1kv
	I1209 10:49:51.364826  627293 out.go:235]   - Configuring RBAC rules ...
	I1209 10:49:51.364951  627293 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 10:49:51.370786  627293 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 10:49:51.381797  627293 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 10:49:51.388857  627293 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 10:49:51.392743  627293 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 10:49:51.397933  627293 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 10:49:51.652382  627293 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 10:49:52.085079  627293 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 10:49:52.651844  627293 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 10:49:52.653438  627293 kubeadm.go:310] 
	I1209 10:49:52.653557  627293 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 10:49:52.653580  627293 kubeadm.go:310] 
	I1209 10:49:52.653672  627293 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 10:49:52.653682  627293 kubeadm.go:310] 
	I1209 10:49:52.653710  627293 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 10:49:52.653783  627293 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 10:49:52.653859  627293 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 10:49:52.653869  627293 kubeadm.go:310] 
	I1209 10:49:52.653946  627293 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 10:49:52.653955  627293 kubeadm.go:310] 
	I1209 10:49:52.654040  627293 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 10:49:52.654062  627293 kubeadm.go:310] 
	I1209 10:49:52.654116  627293 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 10:49:52.654229  627293 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 10:49:52.654328  627293 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 10:49:52.654347  627293 kubeadm.go:310] 
	I1209 10:49:52.654461  627293 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 10:49:52.654579  627293 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 10:49:52.654591  627293 kubeadm.go:310] 
	I1209 10:49:52.654710  627293 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3fogiz.oanziwjzsm1wr1kv \
	I1209 10:49:52.654860  627293 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 10:49:52.654894  627293 kubeadm.go:310] 	--control-plane 
	I1209 10:49:52.654903  627293 kubeadm.go:310] 
	I1209 10:49:52.655035  627293 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 10:49:52.655045  627293 kubeadm.go:310] 
	I1209 10:49:52.655125  627293 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3fogiz.oanziwjzsm1wr1kv \
	I1209 10:49:52.655253  627293 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 10:49:52.656128  627293 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 10:49:52.656180  627293 cni.go:84] Creating CNI manager for ""
	I1209 10:49:52.656208  627293 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 10:49:52.657779  627293 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1209 10:49:52.659033  627293 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 10:49:52.663808  627293 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1209 10:49:52.663829  627293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 10:49:52.683028  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 10:49:53.058715  627293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 10:49:53.058827  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:53.058833  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792382 minikube.k8s.io/updated_at=2024_12_09T10_49_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=ha-792382 minikube.k8s.io/primary=true
	I1209 10:49:53.086878  627293 ops.go:34] apiserver oom_adj: -16
	I1209 10:49:53.256202  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:53.756573  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:54.256994  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:54.756404  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:55.257137  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:55.756813  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:56.256686  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:56.352743  627293 kubeadm.go:1113] duration metric: took 3.294004538s to wait for elevateKubeSystemPrivileges
	I1209 10:49:56.352793  627293 kubeadm.go:394] duration metric: took 14.397907015s to StartCluster
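Editorial note: the repeated "kubectl get sa default" calls above form a simple poll loop. After binding cluster-admin to the kube-system default service account, minikube keeps querying until the default ServiceAccount exists (about 3.3s in this run) before declaring the cluster started. A generic version of that wait loop, shelling out to kubectl, is sketched below; the timeout, poll interval, and reliance on kubectl being on PATH are assumptions for the example, not values from minikube.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const (
		interval = 500 * time.Millisecond // assumed poll interval
		timeout  = 2 * time.Minute        // assumed overall deadline
	)
	deadline := time.Now().Add(timeout)
	for {
		// succeeds once the default ServiceAccount exists in the default namespace
		err := exec.Command("kubectl", "get", "sa", "default", "-n", "default").Run()
		if err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		if time.Now().After(deadline) {
			fmt.Fprintln(os.Stderr, "timed out waiting for default ServiceAccount:", err)
			os.Exit(1)
		}
		time.Sleep(interval)
	}
}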
	I1209 10:49:56.352820  627293 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:56.352918  627293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:49:56.354019  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:56.354304  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 10:49:56.354300  627293 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:49:56.354326  627293 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 10:49:56.354417  627293 start.go:241] waiting for startup goroutines ...
	I1209 10:49:56.354432  627293 addons.go:69] Setting storage-provisioner=true in profile "ha-792382"
	I1209 10:49:56.354455  627293 addons.go:234] Setting addon storage-provisioner=true in "ha-792382"
	I1209 10:49:56.354464  627293 addons.go:69] Setting default-storageclass=true in profile "ha-792382"
	I1209 10:49:56.354495  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:49:56.354504  627293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-792382"
	I1209 10:49:56.354547  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:49:56.354836  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.354867  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.354970  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.355019  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.371190  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I1209 10:49:56.371264  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40229
	I1209 10:49:56.371767  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.371795  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.372258  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.372273  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.372420  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.372446  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.372589  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.372844  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.373068  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:49:56.373184  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.373230  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.375150  627293 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:49:56.375437  627293 kapi.go:59] client config for ha-792382: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt", KeyFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key", CAFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 10:49:56.375916  627293 cert_rotation.go:140] Starting client certificate rotation controller
	I1209 10:49:56.376176  627293 addons.go:234] Setting addon default-storageclass=true in "ha-792382"
	I1209 10:49:56.376225  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:49:56.376515  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.376560  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.389420  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45615
	I1209 10:49:56.390064  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.390648  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.390676  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.391072  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.391316  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:49:56.391995  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I1209 10:49:56.392539  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.393048  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.393071  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.393381  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.393446  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:56.393880  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.393927  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.395537  627293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 10:49:56.396877  627293 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 10:49:56.396901  627293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 10:49:56.396927  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:56.399986  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:56.400413  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:56.400445  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:56.400639  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:56.400862  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:56.401027  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:56.401192  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:56.410237  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I1209 10:49:56.411256  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.413501  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.413527  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.414391  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.414656  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:49:56.416343  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:56.416575  627293 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 10:49:56.416592  627293 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 10:49:56.416608  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:56.419239  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:56.419746  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:56.419776  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:56.419875  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:56.420076  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:56.420261  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:56.420422  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:56.497434  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
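
As an illustrative aside (assuming the CoreDNS ConfigMap keeps its default Corefile data key), the hosts block that the sed pipeline above splices in, so pods can resolve host.minikube.internal to 192.168.39.1, can be inspected afterwards with a plain kubectl call:

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns \
      -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
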
	I1209 10:49:56.595755  627293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 10:49:56.677666  627293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 10:49:57.066334  627293 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1209 10:49:57.258939  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.258974  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.258947  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.259060  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.259277  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.259322  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.259343  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.259358  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.259450  627293 main.go:141] libmachine: (ha-792382) DBG | Closing plugin on server side
	I1209 10:49:57.259495  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.259510  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.259523  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.259535  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.259638  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.259658  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.259664  627293 main.go:141] libmachine: (ha-792382) DBG | Closing plugin on server side
	I1209 10:49:57.259795  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.259815  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.259895  627293 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 10:49:57.259914  627293 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 10:49:57.260014  627293 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1209 10:49:57.260024  627293 round_trippers.go:469] Request Headers:
	I1209 10:49:57.260035  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:49:57.260046  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:49:57.272826  627293 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1209 10:49:57.273379  627293 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1209 10:49:57.273393  627293 round_trippers.go:469] Request Headers:
	I1209 10:49:57.273400  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:49:57.273404  627293 round_trippers.go:473]     Content-Type: application/json
	I1209 10:49:57.273408  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:49:57.276004  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:49:57.276170  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.276182  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.276582  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.276606  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.278423  627293 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1209 10:49:57.279715  627293 addons.go:510] duration metric: took 925.38672ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 10:49:57.279752  627293 start.go:246] waiting for cluster config update ...
	I1209 10:49:57.279765  627293 start.go:255] writing updated cluster config ...
	I1209 10:49:57.281341  627293 out.go:201] 
	I1209 10:49:57.282688  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:49:57.282758  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:49:57.284265  627293 out.go:177] * Starting "ha-792382-m02" control-plane node in "ha-792382" cluster
	I1209 10:49:57.285340  627293 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:49:57.285363  627293 cache.go:56] Caching tarball of preloaded images
	I1209 10:49:57.285479  627293 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 10:49:57.285499  627293 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 10:49:57.285580  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:49:57.285772  627293 start.go:360] acquireMachinesLock for ha-792382-m02: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 10:49:57.285830  627293 start.go:364] duration metric: took 34.649µs to acquireMachinesLock for "ha-792382-m02"
	I1209 10:49:57.285855  627293 start.go:93] Provisioning new machine with config: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:49:57.285945  627293 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1209 10:49:57.287544  627293 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 10:49:57.287637  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:57.287679  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:57.302923  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45627
	I1209 10:49:57.303345  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:57.303929  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:57.303955  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:57.304276  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:57.304507  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetMachineName
	I1209 10:49:57.304682  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:49:57.304915  627293 start.go:159] libmachine.API.Create for "ha-792382" (driver="kvm2")
	I1209 10:49:57.304958  627293 client.go:168] LocalClient.Create starting
	I1209 10:49:57.305006  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 10:49:57.305054  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:49:57.305076  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:49:57.305150  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 10:49:57.305184  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:49:57.305200  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:49:57.305226  627293 main.go:141] libmachine: Running pre-create checks...
	I1209 10:49:57.305237  627293 main.go:141] libmachine: (ha-792382-m02) Calling .PreCreateCheck
	I1209 10:49:57.305467  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetConfigRaw
	I1209 10:49:57.305949  627293 main.go:141] libmachine: Creating machine...
	I1209 10:49:57.305967  627293 main.go:141] libmachine: (ha-792382-m02) Calling .Create
	I1209 10:49:57.306165  627293 main.go:141] libmachine: (ha-792382-m02) Creating KVM machine...
	I1209 10:49:57.307365  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found existing default KVM network
	I1209 10:49:57.307532  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found existing private KVM network mk-ha-792382
	I1209 10:49:57.307606  627293 main.go:141] libmachine: (ha-792382-m02) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02 ...
	I1209 10:49:57.307640  627293 main.go:141] libmachine: (ha-792382-m02) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 10:49:57.307676  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:57.307595  627662 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:57.307776  627293 main.go:141] libmachine: (ha-792382-m02) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 10:49:57.586533  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:57.586377  627662 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa...
	I1209 10:49:57.697560  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:57.697424  627662 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/ha-792382-m02.rawdisk...
	I1209 10:49:57.697602  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Writing magic tar header
	I1209 10:49:57.697613  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Writing SSH key tar header
	I1209 10:49:57.697621  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:57.697562  627662 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02 ...
	I1209 10:49:57.697695  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02
	I1209 10:49:57.697714  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 10:49:57.697722  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02 (perms=drwx------)
	I1209 10:49:57.697738  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 10:49:57.697757  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:57.697771  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 10:49:57.697780  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 10:49:57.697790  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 10:49:57.697797  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins
	I1209 10:49:57.697803  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home
	I1209 10:49:57.697812  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Skipping /home - not owner
	I1209 10:49:57.697828  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 10:49:57.697853  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 10:49:57.697862  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 10:49:57.697867  627293 main.go:141] libmachine: (ha-792382-m02) Creating domain...
	I1209 10:49:57.698931  627293 main.go:141] libmachine: (ha-792382-m02) define libvirt domain using xml: 
	I1209 10:49:57.698948  627293 main.go:141] libmachine: (ha-792382-m02) <domain type='kvm'>
	I1209 10:49:57.698955  627293 main.go:141] libmachine: (ha-792382-m02)   <name>ha-792382-m02</name>
	I1209 10:49:57.698960  627293 main.go:141] libmachine: (ha-792382-m02)   <memory unit='MiB'>2200</memory>
	I1209 10:49:57.698965  627293 main.go:141] libmachine: (ha-792382-m02)   <vcpu>2</vcpu>
	I1209 10:49:57.698968  627293 main.go:141] libmachine: (ha-792382-m02)   <features>
	I1209 10:49:57.698974  627293 main.go:141] libmachine: (ha-792382-m02)     <acpi/>
	I1209 10:49:57.698977  627293 main.go:141] libmachine: (ha-792382-m02)     <apic/>
	I1209 10:49:57.698982  627293 main.go:141] libmachine: (ha-792382-m02)     <pae/>
	I1209 10:49:57.698985  627293 main.go:141] libmachine: (ha-792382-m02)     
	I1209 10:49:57.698991  627293 main.go:141] libmachine: (ha-792382-m02)   </features>
	I1209 10:49:57.698996  627293 main.go:141] libmachine: (ha-792382-m02)   <cpu mode='host-passthrough'>
	I1209 10:49:57.699000  627293 main.go:141] libmachine: (ha-792382-m02)   
	I1209 10:49:57.699004  627293 main.go:141] libmachine: (ha-792382-m02)   </cpu>
	I1209 10:49:57.699009  627293 main.go:141] libmachine: (ha-792382-m02)   <os>
	I1209 10:49:57.699013  627293 main.go:141] libmachine: (ha-792382-m02)     <type>hvm</type>
	I1209 10:49:57.699018  627293 main.go:141] libmachine: (ha-792382-m02)     <boot dev='cdrom'/>
	I1209 10:49:57.699034  627293 main.go:141] libmachine: (ha-792382-m02)     <boot dev='hd'/>
	I1209 10:49:57.699053  627293 main.go:141] libmachine: (ha-792382-m02)     <bootmenu enable='no'/>
	I1209 10:49:57.699065  627293 main.go:141] libmachine: (ha-792382-m02)   </os>
	I1209 10:49:57.699070  627293 main.go:141] libmachine: (ha-792382-m02)   <devices>
	I1209 10:49:57.699074  627293 main.go:141] libmachine: (ha-792382-m02)     <disk type='file' device='cdrom'>
	I1209 10:49:57.699083  627293 main.go:141] libmachine: (ha-792382-m02)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/boot2docker.iso'/>
	I1209 10:49:57.699087  627293 main.go:141] libmachine: (ha-792382-m02)       <target dev='hdc' bus='scsi'/>
	I1209 10:49:57.699092  627293 main.go:141] libmachine: (ha-792382-m02)       <readonly/>
	I1209 10:49:57.699095  627293 main.go:141] libmachine: (ha-792382-m02)     </disk>
	I1209 10:49:57.699101  627293 main.go:141] libmachine: (ha-792382-m02)     <disk type='file' device='disk'>
	I1209 10:49:57.699106  627293 main.go:141] libmachine: (ha-792382-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 10:49:57.699114  627293 main.go:141] libmachine: (ha-792382-m02)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/ha-792382-m02.rawdisk'/>
	I1209 10:49:57.699122  627293 main.go:141] libmachine: (ha-792382-m02)       <target dev='hda' bus='virtio'/>
	I1209 10:49:57.699137  627293 main.go:141] libmachine: (ha-792382-m02)     </disk>
	I1209 10:49:57.699147  627293 main.go:141] libmachine: (ha-792382-m02)     <interface type='network'>
	I1209 10:49:57.699179  627293 main.go:141] libmachine: (ha-792382-m02)       <source network='mk-ha-792382'/>
	I1209 10:49:57.699205  627293 main.go:141] libmachine: (ha-792382-m02)       <model type='virtio'/>
	I1209 10:49:57.699214  627293 main.go:141] libmachine: (ha-792382-m02)     </interface>
	I1209 10:49:57.699227  627293 main.go:141] libmachine: (ha-792382-m02)     <interface type='network'>
	I1209 10:49:57.699257  627293 main.go:141] libmachine: (ha-792382-m02)       <source network='default'/>
	I1209 10:49:57.699276  627293 main.go:141] libmachine: (ha-792382-m02)       <model type='virtio'/>
	I1209 10:49:57.699287  627293 main.go:141] libmachine: (ha-792382-m02)     </interface>
	I1209 10:49:57.699295  627293 main.go:141] libmachine: (ha-792382-m02)     <serial type='pty'>
	I1209 10:49:57.699302  627293 main.go:141] libmachine: (ha-792382-m02)       <target port='0'/>
	I1209 10:49:57.699309  627293 main.go:141] libmachine: (ha-792382-m02)     </serial>
	I1209 10:49:57.699314  627293 main.go:141] libmachine: (ha-792382-m02)     <console type='pty'>
	I1209 10:49:57.699320  627293 main.go:141] libmachine: (ha-792382-m02)       <target type='serial' port='0'/>
	I1209 10:49:57.699325  627293 main.go:141] libmachine: (ha-792382-m02)     </console>
	I1209 10:49:57.699332  627293 main.go:141] libmachine: (ha-792382-m02)     <rng model='virtio'>
	I1209 10:49:57.699338  627293 main.go:141] libmachine: (ha-792382-m02)       <backend model='random'>/dev/random</backend>
	I1209 10:49:57.699352  627293 main.go:141] libmachine: (ha-792382-m02)     </rng>
	I1209 10:49:57.699360  627293 main.go:141] libmachine: (ha-792382-m02)     
	I1209 10:49:57.699364  627293 main.go:141] libmachine: (ha-792382-m02)     
	I1209 10:49:57.699370  627293 main.go:141] libmachine: (ha-792382-m02)   </devices>
	I1209 10:49:57.699374  627293 main.go:141] libmachine: (ha-792382-m02) </domain>
	I1209 10:49:57.699384  627293 main.go:141] libmachine: (ha-792382-m02) 
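
As an illustrative aside, the XML printed above is a standard libvirt domain definition; driven by hand, the equivalent flow would be to save it to a file (here a hypothetical ha-792382-m02.xml holding that XML) and define and start it through virsh against the same qemu:///system URI:

    virsh --connect qemu:///system define ha-792382-m02.xml
    virsh --connect qemu:///system start ha-792382-m02
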
	I1209 10:49:57.706829  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:be:31:4f in network default
	I1209 10:49:57.707394  627293 main.go:141] libmachine: (ha-792382-m02) Ensuring networks are active...
	I1209 10:49:57.707420  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:49:57.708099  627293 main.go:141] libmachine: (ha-792382-m02) Ensuring network default is active
	I1209 10:49:57.708447  627293 main.go:141] libmachine: (ha-792382-m02) Ensuring network mk-ha-792382 is active
	I1209 10:49:57.708833  627293 main.go:141] libmachine: (ha-792382-m02) Getting domain xml...
	I1209 10:49:57.709562  627293 main.go:141] libmachine: (ha-792382-m02) Creating domain...
	I1209 10:49:58.965991  627293 main.go:141] libmachine: (ha-792382-m02) Waiting to get IP...
	I1209 10:49:58.967025  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:49:58.967615  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:49:58.967718  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:58.967609  627662 retry.go:31] will retry after 289.483594ms: waiting for machine to come up
	I1209 10:49:59.259398  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:49:59.259929  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:49:59.259958  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:59.259877  627662 retry.go:31] will retry after 368.739813ms: waiting for machine to come up
	I1209 10:49:59.630595  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:49:59.631082  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:49:59.631111  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:59.631032  627662 retry.go:31] will retry after 468.793736ms: waiting for machine to come up
	I1209 10:50:00.101924  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:00.102437  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:00.102468  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:00.102389  627662 retry.go:31] will retry after 467.16032ms: waiting for machine to come up
	I1209 10:50:00.571568  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:00.572085  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:00.572158  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:00.571967  627662 retry.go:31] will retry after 614.331886ms: waiting for machine to come up
	I1209 10:50:01.188165  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:01.188721  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:01.188753  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:01.188683  627662 retry.go:31] will retry after 622.291039ms: waiting for machine to come up
	I1209 10:50:01.812761  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:01.813166  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:01.813197  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:01.813093  627662 retry.go:31] will retry after 970.350077ms: waiting for machine to come up
	I1209 10:50:02.785861  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:02.786416  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:02.786477  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:02.786368  627662 retry.go:31] will retry after 1.09205339s: waiting for machine to come up
	I1209 10:50:03.879814  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:03.880303  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:03.880327  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:03.880248  627662 retry.go:31] will retry after 1.765651975s: waiting for machine to come up
	I1209 10:50:05.648159  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:05.648671  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:05.648696  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:05.648615  627662 retry.go:31] will retry after 1.762832578s: waiting for machine to come up
	I1209 10:50:07.413599  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:07.414030  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:07.414059  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:07.413978  627662 retry.go:31] will retry after 2.150383903s: waiting for machine to come up
	I1209 10:50:09.565911  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:09.566390  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:09.566420  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:09.566350  627662 retry.go:31] will retry after 3.049537741s: waiting for machine to come up
	I1209 10:50:12.617744  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:12.618241  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:12.618276  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:12.618155  627662 retry.go:31] will retry after 3.599687882s: waiting for machine to come up
	I1209 10:50:16.219399  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:16.219837  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:16.219868  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:16.219789  627662 retry.go:31] will retry after 3.518875962s: waiting for machine to come up
	I1209 10:50:19.740130  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.740985  627293 main.go:141] libmachine: (ha-792382-m02) Found IP for machine: 192.168.39.89
	I1209 10:50:19.741024  627293 main.go:141] libmachine: (ha-792382-m02) Reserving static IP address...
	I1209 10:50:19.741037  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.741518  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find host DHCP lease matching {name: "ha-792382-m02", mac: "52:54:00:95:40:00", ip: "192.168.39.89"} in network mk-ha-792382
	I1209 10:50:19.814048  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Getting to WaitForSSH function...
	I1209 10:50:19.814070  627293 main.go:141] libmachine: (ha-792382-m02) Reserved static IP address: 192.168.39.89
	I1209 10:50:19.814078  627293 main.go:141] libmachine: (ha-792382-m02) Waiting for SSH to be available...
	I1209 10:50:19.816613  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.817057  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:95:40:00}
	I1209 10:50:19.817166  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.817261  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Using SSH client type: external
	I1209 10:50:19.817282  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa (-rw-------)
	I1209 10:50:19.817362  627293 main.go:141] libmachine: (ha-792382-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 10:50:19.817390  627293 main.go:141] libmachine: (ha-792382-m02) DBG | About to run SSH command:
	I1209 10:50:19.817411  627293 main.go:141] libmachine: (ha-792382-m02) DBG | exit 0
	I1209 10:50:19.942297  627293 main.go:141] libmachine: (ha-792382-m02) DBG | SSH cmd err, output: <nil>: 
	I1209 10:50:19.942595  627293 main.go:141] libmachine: (ha-792382-m02) KVM machine creation complete!
	I1209 10:50:19.942914  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetConfigRaw
	I1209 10:50:19.943559  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:19.943781  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:19.943947  627293 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 10:50:19.943965  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetState
	I1209 10:50:19.945579  627293 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 10:50:19.945598  627293 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 10:50:19.945607  627293 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 10:50:19.945616  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:19.947916  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.948374  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:19.948400  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.948582  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:19.948773  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:19.948920  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:19.949049  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:19.949307  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:19.949555  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:19.949573  627293 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 10:50:20.053499  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:50:20.053528  627293 main.go:141] libmachine: Detecting the provisioner...
	I1209 10:50:20.053541  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.056444  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.056881  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.056911  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.057119  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.057366  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.057545  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.057698  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.057856  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:20.058022  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:20.058034  627293 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 10:50:20.162532  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 10:50:20.162621  627293 main.go:141] libmachine: found compatible host: buildroot
	I1209 10:50:20.162636  627293 main.go:141] libmachine: Provisioning with buildroot...
	I1209 10:50:20.162651  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetMachineName
	I1209 10:50:20.162892  627293 buildroot.go:166] provisioning hostname "ha-792382-m02"
	I1209 10:50:20.162921  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetMachineName
	I1209 10:50:20.163135  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.165692  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.166051  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.166078  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.166237  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.166425  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.166592  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.166734  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.166863  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:20.167071  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:20.167087  627293 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792382-m02 && echo "ha-792382-m02" | sudo tee /etc/hostname
	I1209 10:50:20.285783  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792382-m02
	
	I1209 10:50:20.285812  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.288581  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.288945  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.289006  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.289156  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.289374  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.289525  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.289675  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.289834  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:20.290050  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:20.290067  627293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792382-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792382-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792382-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 10:50:20.403745  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:50:20.403780  627293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 10:50:20.403797  627293 buildroot.go:174] setting up certificates
	I1209 10:50:20.403807  627293 provision.go:84] configureAuth start
	I1209 10:50:20.403816  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetMachineName
	I1209 10:50:20.404127  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetIP
	I1209 10:50:20.406853  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.407317  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.407339  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.407523  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.410235  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.410616  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.410641  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.410813  627293 provision.go:143] copyHostCerts
	I1209 10:50:20.410851  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:50:20.410897  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 10:50:20.410910  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:50:20.410996  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 10:50:20.411092  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:50:20.411117  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 10:50:20.411127  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:50:20.411167  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 10:50:20.411241  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:50:20.411265  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 10:50:20.411274  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:50:20.411310  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 10:50:20.411379  627293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.ha-792382-m02 san=[127.0.0.1 192.168.39.89 ha-792382-m02 localhost minikube]
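
As an illustrative aside, the server certificate generated here, with the SAN list shown above, can be examined once written with a standard openssl call:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
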
	I1209 10:50:20.506946  627293 provision.go:177] copyRemoteCerts
	I1209 10:50:20.507013  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 10:50:20.507043  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.509588  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.509997  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.510031  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.510256  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.510485  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.510630  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.510792  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa Username:docker}
	I1209 10:50:20.591669  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 10:50:20.591738  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 10:50:20.614379  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 10:50:20.614474  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 10:50:20.635752  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 10:50:20.635819  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 10:50:20.657840  627293 provision.go:87] duration metric: took 254.019642ms to configureAuth
	I1209 10:50:20.657873  627293 buildroot.go:189] setting minikube options for container-runtime
	I1209 10:50:20.658088  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:50:20.658221  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.661758  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.662150  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.662207  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.662350  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.662590  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.662773  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.662982  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.663174  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:20.663396  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:20.663417  627293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 10:50:20.895342  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 10:50:20.895376  627293 main.go:141] libmachine: Checking connection to Docker...
	I1209 10:50:20.895386  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetURL
	I1209 10:50:20.896678  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Using libvirt version 6000000
	I1209 10:50:20.899127  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.899492  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.899524  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.899662  627293 main.go:141] libmachine: Docker is up and running!
	I1209 10:50:20.899675  627293 main.go:141] libmachine: Reticulating splines...
	I1209 10:50:20.899683  627293 client.go:171] duration metric: took 23.594715586s to LocalClient.Create
	I1209 10:50:20.899712  627293 start.go:167] duration metric: took 23.594799788s to libmachine.API.Create "ha-792382"
	I1209 10:50:20.899727  627293 start.go:293] postStartSetup for "ha-792382-m02" (driver="kvm2")
	I1209 10:50:20.899740  627293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 10:50:20.899762  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:20.899988  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 10:50:20.900011  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.902193  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.902545  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.902574  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.902733  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.902907  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.903055  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.903224  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa Username:docker}
	I1209 10:50:20.987979  627293 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 10:50:20.992183  627293 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 10:50:20.992210  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 10:50:20.992280  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 10:50:20.992373  627293 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 10:50:20.992388  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /etc/ssl/certs/6170172.pem
	I1209 10:50:20.992517  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 10:50:21.001255  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:50:21.023333  627293 start.go:296] duration metric: took 123.590873ms for postStartSetup
	I1209 10:50:21.023382  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetConfigRaw
	I1209 10:50:21.024074  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetIP
	I1209 10:50:21.026760  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.027216  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.027253  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.027452  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:50:21.027657  627293 start.go:128] duration metric: took 23.741699232s to createHost
	I1209 10:50:21.027689  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:21.029948  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.030322  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.030343  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.030537  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:21.030708  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:21.030868  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:21.031040  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:21.031235  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:21.031525  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:21.031542  627293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 10:50:21.134634  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733741421.109382404
	
	I1209 10:50:21.134664  627293 fix.go:216] guest clock: 1733741421.109382404
	I1209 10:50:21.134671  627293 fix.go:229] Guest: 2024-12-09 10:50:21.109382404 +0000 UTC Remote: 2024-12-09 10:50:21.027672389 +0000 UTC m=+68.911911388 (delta=81.710015ms)
	I1209 10:50:21.134687  627293 fix.go:200] guest clock delta is within tolerance: 81.710015ms
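The clock check above runs `date +%s.%N` inside the guest and compares it with the host wall clock at the moment the command returns: 1733741421.109382404 − 1733741421.027672389 ≈ 0.08171 s, i.e. the 81.710015ms delta reported. A rough shell equivalent (the ssh user and host are illustrative):

    guest=$(ssh docker@192.168.39.89 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta: %.6f s\n", g - h }'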
	I1209 10:50:21.134693  627293 start.go:83] releasing machines lock for "ha-792382-m02", held for 23.84885063s
	I1209 10:50:21.134711  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:21.135011  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetIP
	I1209 10:50:21.137922  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.138329  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.138359  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.140711  627293 out.go:177] * Found network options:
	I1209 10:50:21.142033  627293 out.go:177]   - NO_PROXY=192.168.39.69
	W1209 10:50:21.143264  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 10:50:21.143304  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:21.143961  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:21.144186  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:21.144305  627293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 10:50:21.144354  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	W1209 10:50:21.144454  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 10:50:21.144534  627293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 10:50:21.144559  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:21.147622  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.147846  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.147959  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.147994  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.148084  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:21.148250  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:21.148369  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.148396  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.148419  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:21.148619  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa Username:docker}
	I1209 10:50:21.148763  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:21.148870  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:21.149177  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:21.149326  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa Username:docker}
	I1209 10:50:21.377528  627293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 10:50:21.383869  627293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 10:50:21.383962  627293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 10:50:21.402713  627293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 10:50:21.402747  627293 start.go:495] detecting cgroup driver to use...
	I1209 10:50:21.402836  627293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 10:50:21.418644  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 10:50:21.431825  627293 docker.go:217] disabling cri-docker service (if available) ...
	I1209 10:50:21.431894  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 10:50:21.445030  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 10:50:21.458235  627293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 10:50:21.576888  627293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 10:50:21.715254  627293 docker.go:233] disabling docker service ...
	I1209 10:50:21.715337  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 10:50:21.728777  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 10:50:21.741484  627293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 10:50:21.877920  627293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 10:50:21.987438  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 10:50:22.000287  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 10:50:22.017967  627293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 10:50:22.018044  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.027586  627293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 10:50:22.027647  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.037032  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.046716  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.056390  627293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 10:50:22.066025  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.075591  627293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.092169  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.102292  627293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 10:50:22.111580  627293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 10:50:22.111645  627293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 10:50:22.124823  627293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 10:50:22.134059  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:50:22.267517  627293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 10:50:22.360113  627293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 10:50:22.360202  627293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 10:50:22.366049  627293 start.go:563] Will wait 60s for crictl version
	I1209 10:50:22.366124  627293 ssh_runner.go:195] Run: which crictl
	I1209 10:50:22.369685  627293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 10:50:22.406117  627293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 10:50:22.406233  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:50:22.433831  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:50:22.466702  627293 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 10:50:22.468114  627293 out.go:177]   - env NO_PROXY=192.168.39.69
	I1209 10:50:22.469415  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetIP
	I1209 10:50:22.472354  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:22.472792  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:22.472838  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:22.473063  627293 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 10:50:22.478206  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:50:22.490975  627293 mustload.go:65] Loading cluster: ha-792382
	I1209 10:50:22.491223  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:50:22.491515  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:50:22.491566  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:50:22.507354  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34813
	I1209 10:50:22.507839  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:50:22.508378  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:50:22.508407  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:50:22.508811  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:50:22.509022  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:50:22.510469  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:50:22.510748  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:50:22.510785  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:50:22.525474  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34445
	I1209 10:50:22.525972  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:50:22.526524  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:50:22.526554  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:50:22.526848  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:50:22.527055  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:50:22.527271  627293 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382 for IP: 192.168.39.89
	I1209 10:50:22.527285  627293 certs.go:194] generating shared ca certs ...
	I1209 10:50:22.527308  627293 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:50:22.527465  627293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 10:50:22.527507  627293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 10:50:22.527514  627293 certs.go:256] generating profile certs ...
	I1209 10:50:22.527587  627293 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key
	I1209 10:50:22.527613  627293 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.8c4cfabb
	I1209 10:50:22.527628  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.8c4cfabb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.89 192.168.39.254]
	I1209 10:50:22.618893  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.8c4cfabb ...
	I1209 10:50:22.618924  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.8c4cfabb: {Name:mk9fc14aa3aaf65091f9f2d45f3765515e31473e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:50:22.619129  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.8c4cfabb ...
	I1209 10:50:22.619148  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.8c4cfabb: {Name:mk41f99fa98267e5a58e4b407fa7296350fea4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:50:22.619255  627293 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.8c4cfabb -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt
	I1209 10:50:22.619394  627293 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.8c4cfabb -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key
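The apiserver profile certificate generated above covers the service VIP (10.96.0.1), localhost, both control-plane node IPs and the kube-vip address 192.168.39.254. One way to confirm the SAN list in the copied certificate (path as logged above):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt \
      | grep -A1 'Subject Alternative Name'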
	I1209 10:50:22.619538  627293 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key
	I1209 10:50:22.619555  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 10:50:22.619568  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 10:50:22.619579  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 10:50:22.619593  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 10:50:22.619603  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 10:50:22.619614  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 10:50:22.619626  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 10:50:22.619636  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 10:50:22.619683  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 10:50:22.619711  627293 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 10:50:22.619720  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 10:50:22.619743  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 10:50:22.619767  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 10:50:22.619790  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 10:50:22.619828  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:50:22.619853  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /usr/share/ca-certificates/6170172.pem
	I1209 10:50:22.619866  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:50:22.619877  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem -> /usr/share/ca-certificates/617017.pem
	I1209 10:50:22.619908  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:50:22.623291  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:50:22.623706  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:50:22.623734  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:50:22.623919  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:50:22.624122  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:50:22.624329  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:50:22.624526  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:50:22.694590  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 10:50:22.700190  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 10:50:22.715537  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 10:50:22.720737  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 10:50:22.731623  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 10:50:22.736050  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 10:50:22.747578  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 10:50:22.752312  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 10:50:22.763588  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 10:50:22.768050  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 10:50:22.777655  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 10:50:22.781717  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1209 10:50:22.792464  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 10:50:22.816318  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 10:50:22.837988  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 10:50:22.861671  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 10:50:22.883735  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1209 10:50:22.904888  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 10:50:22.926092  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 10:50:22.947329  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 10:50:22.968466  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 10:50:22.989908  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 10:50:23.012190  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 10:50:23.036349  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 10:50:23.051329  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 10:50:23.066824  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 10:50:23.081626  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 10:50:23.096856  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 10:50:23.112249  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1209 10:50:23.126784  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 10:50:23.141365  627293 ssh_runner.go:195] Run: openssl version
	I1209 10:50:23.146879  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 10:50:23.156698  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 10:50:23.160669  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 10:50:23.160717  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 10:50:23.166987  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 10:50:23.176745  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 10:50:23.186586  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:50:23.190639  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:50:23.190687  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:50:23.195990  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 10:50:23.205745  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 10:50:23.215364  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 10:50:23.219316  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 10:50:23.219368  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 10:50:23.225208  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 10:50:23.235141  627293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 10:50:23.238820  627293 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 10:50:23.238882  627293 kubeadm.go:934] updating node {m02 192.168.39.89 8443 v1.31.2 crio true true} ...
	I1209 10:50:23.238988  627293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792382-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 10:50:23.239016  627293 kube-vip.go:115] generating kube-vip config ...
	I1209 10:50:23.239060  627293 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 10:50:23.254073  627293 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 10:50:23.254184  627293 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1209 10:50:23.254233  627293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 10:50:23.263688  627293 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1209 10:50:23.263749  627293 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1209 10:50:23.272494  627293 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1209 10:50:23.272527  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 10:50:23.272570  627293 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1209 10:50:23.272599  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 10:50:23.272611  627293 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1209 10:50:23.276784  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1209 10:50:23.276819  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1209 10:50:24.168986  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 10:50:24.169072  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 10:50:24.174707  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1209 10:50:24.174764  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1209 10:50:24.294393  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:50:24.325197  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 10:50:24.325289  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 10:50:24.335547  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1209 10:50:24.335594  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
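The three Kubernetes binaries above are downloaded from dl.k8s.io with a checksum taken from the matching .sha256 file, then scp'd into /var/lib/minikube/binaries/v1.31.2 on the node. A manual equivalent for one of them (kubectl shown; kubeadm and kubelet follow the same pattern):

    v=v1.31.2
    curl -LO "https://dl.k8s.io/release/$v/bin/linux/amd64/kubectl"
    curl -LO "https://dl.k8s.io/release/$v/bin/linux/amd64/kubectl.sha256"
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check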
	I1209 10:50:24.706937  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 10:50:24.715886  627293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 10:50:24.731189  627293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 10:50:24.746662  627293 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 10:50:24.762089  627293 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 10:50:24.765881  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:50:24.777191  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:50:24.904006  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:50:24.921009  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:50:24.921461  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:50:24.921511  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:50:24.937482  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I1209 10:50:24.937973  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:50:24.938486  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:50:24.938508  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:50:24.938885  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:50:24.939098  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:50:24.939248  627293 start.go:317] joinCluster: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:50:24.939386  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 10:50:24.939418  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:50:24.942285  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:50:24.942827  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:50:24.942855  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:50:24.942985  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:50:24.943215  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:50:24.943387  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:50:24.943515  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:50:25.097594  627293 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:50:25.097643  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dvotig.smgl74cs6saznre8 --discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-792382-m02 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443"
	I1209 10:50:47.230030  627293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dvotig.smgl74cs6saznre8 --discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-792382-m02 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443": (22.132356511s)
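The join command executed above was produced by the earlier `kubeadm token create --print-join-command --ttl=0` run on the existing control plane; the --discovery-token-ca-cert-hash value is the SHA-256 of the cluster CA's public key and can be recomputed on any node that holds the CA:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'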
	I1209 10:50:47.230081  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1209 10:50:47.777805  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792382-m02 minikube.k8s.io/updated_at=2024_12_09T10_50_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=ha-792382 minikube.k8s.io/primary=false
	I1209 10:50:47.938150  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-792382-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 10:50:48.082480  627293 start.go:319] duration metric: took 23.143228187s to joinCluster
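After the join, the new member is labeled with minikube metadata and its control-plane NoSchedule taint is removed so it can also run workloads (the two kubectl runs above). Both can be verified by hand; the context name ha-792382 is assumed to match the profile, as minikube normally names it:

    kubectl --context ha-792382 get node ha-792382-m02 --show-labels
    kubectl --context ha-792382 describe node ha-792382-m02 | grep -A2 Taints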
	I1209 10:50:48.082581  627293 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:50:48.082941  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:50:48.084770  627293 out.go:177] * Verifying Kubernetes components...
	I1209 10:50:48.085991  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:50:48.337603  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:50:48.368412  627293 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:50:48.368651  627293 kapi.go:59] client config for ha-792382: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt", KeyFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key", CAFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 10:50:48.368776  627293 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.69:8443
	I1209 10:50:48.369027  627293 node_ready.go:35] waiting up to 6m0s for node "ha-792382-m02" to be "Ready" ...
	I1209 10:50:48.369182  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:48.369197  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:48.369210  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:48.369215  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:48.379219  627293 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1209 10:50:48.869436  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:48.869471  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:48.869484  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:48.869491  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:48.873562  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:50:49.369649  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:49.369671  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:49.369679  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:49.369685  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:49.372678  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:50:49.869490  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:49.869516  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:49.869525  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:49.869529  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:49.872495  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:50:50.369998  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:50.370028  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:50.370038  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:50.370043  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:50.374983  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:50:50.377595  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:50:50.869651  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:50.869674  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:50.869688  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:50.869692  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:50.906453  627293 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I1209 10:50:51.369287  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:51.369317  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:51.369329  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:51.369335  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:51.372362  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:51.870258  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:51.870289  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:51.870302  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:51.870310  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:51.873898  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:52.370080  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:52.370105  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:52.370115  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:52.370118  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:52.376430  627293 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 10:50:52.869331  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:52.869355  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:52.869364  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:52.869368  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:52.873136  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:52.873737  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:50:53.370232  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:53.370258  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:53.370267  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:53.370272  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:53.373647  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:53.869640  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:53.869666  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:53.869674  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:53.869678  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:53.872620  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:50:54.369762  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:54.369789  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:54.369798  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:54.369802  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:54.373551  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:54.869513  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:54.869538  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:54.869547  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:54.869552  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:54.872817  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:55.369351  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:55.369377  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:55.369387  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:55.369391  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:55.372662  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:55.373185  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:50:55.869601  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:55.869626  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:55.869636  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:55.869642  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:55.873128  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:56.369713  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:56.369741  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:56.369751  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:56.369755  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:56.373053  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:56.870191  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:56.870225  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:56.870238  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:56.870247  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:56.873685  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:57.369825  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:57.369849  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:57.369858  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:57.369861  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:57.373394  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:57.373898  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:50:57.869257  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:57.869284  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:57.869293  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:57.869297  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:57.872590  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:58.369600  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:58.369629  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:58.369641  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:58.369648  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:58.372771  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:58.869748  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:58.869775  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:58.869784  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:58.869788  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:58.873037  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:59.369979  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:59.370004  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:59.370013  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:59.370017  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:59.373442  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:59.869269  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:59.869294  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:59.869309  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:59.869314  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:59.872720  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:59.873370  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:51:00.369254  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:00.369281  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:00.369289  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:00.369294  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:00.372431  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:00.869327  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:00.869352  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:00.869361  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:00.869365  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:00.872790  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:01.369711  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:01.369743  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:01.369755  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:01.369761  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:01.372739  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:01.869629  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:01.869659  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:01.869672  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:01.869680  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:01.873312  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:01.873858  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:51:02.369761  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:02.369798  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:02.369811  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:02.369818  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:02.373514  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:02.869485  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:02.869511  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:02.869524  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:02.869530  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:02.875847  627293 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 10:51:03.369998  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:03.370025  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:03.370034  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:03.370039  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:03.373227  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:03.870196  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:03.870226  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:03.870238  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:03.870245  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:03.873280  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:03.873981  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:51:04.369276  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:04.369305  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:04.369314  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:04.369318  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:04.373386  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:04.869282  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:04.869309  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:04.869317  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:04.869321  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:04.872919  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:05.369501  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:05.369531  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.369544  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.369551  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.373273  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:05.869275  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:05.869301  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.869313  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.869319  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.875077  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:51:05.875712  627293 node_ready.go:49] node "ha-792382-m02" has status "Ready":"True"
	I1209 10:51:05.875741  627293 node_ready.go:38] duration metric: took 17.506691417s for node "ha-792382-m02" to be "Ready" ...
	I1209 10:51:05.875753  627293 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 10:51:05.875877  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:05.875894  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.875903  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.875908  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.880622  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:05.886687  627293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.886796  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8hlml
	I1209 10:51:05.886807  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.886815  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.886820  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.891623  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:05.892565  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:05.892583  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.892608  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.892615  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.895456  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.895899  627293 pod_ready.go:93] pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:05.895917  627293 pod_ready.go:82] duration metric: took 9.205439ms for pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.895927  627293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.895993  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rz6mw
	I1209 10:51:05.896006  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.896013  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.896016  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.898484  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.899083  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:05.899101  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.899108  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.899112  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.901260  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.901817  627293 pod_ready.go:93] pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:05.901842  627293 pod_ready.go:82] duration metric: took 5.908358ms for pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.901854  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.901923  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382
	I1209 10:51:05.901934  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.901946  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.901953  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.904274  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.905123  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:05.905142  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.905152  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.905158  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.907644  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.908181  627293 pod_ready.go:93] pod "etcd-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:05.908211  627293 pod_ready.go:82] duration metric: took 6.349761ms for pod "etcd-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.908224  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.908297  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382-m02
	I1209 10:51:05.908307  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.908318  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.908329  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.910369  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.910967  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:05.910983  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.910992  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.910997  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.913018  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.913518  627293 pod_ready.go:93] pod "etcd-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:05.913539  627293 pod_ready.go:82] duration metric: took 5.308048ms for pod "etcd-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.913558  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.070017  627293 request.go:632] Waited for 156.363826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382
	I1209 10:51:06.070081  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382
	I1209 10:51:06.070086  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.070095  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.070102  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.073645  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:06.269848  627293 request.go:632] Waited for 195.364699ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:06.269918  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:06.269924  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.269931  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.269935  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.272803  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:06.273443  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:06.273469  627293 pod_ready.go:82] duration metric: took 359.901606ms for pod "kube-apiserver-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.273484  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.469639  627293 request.go:632] Waited for 196.043735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m02
	I1209 10:51:06.469733  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m02
	I1209 10:51:06.469741  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.469754  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.469762  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.473158  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:06.670306  627293 request.go:632] Waited for 196.412719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:06.670379  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:06.670387  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.670399  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.670409  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.673435  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:06.673975  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:06.673996  627293 pod_ready.go:82] duration metric: took 400.504015ms for pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.674006  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.870147  627293 request.go:632] Waited for 196.063707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382
	I1209 10:51:06.870265  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382
	I1209 10:51:06.870276  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.870285  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.870292  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.873707  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.069908  627293 request.go:632] Waited for 195.387799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:07.069975  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:07.069983  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.069995  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.070015  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.073101  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.073736  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:07.073758  627293 pod_ready.go:82] duration metric: took 399.744041ms for pod "kube-controller-manager-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.073774  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.269459  627293 request.go:632] Waited for 195.589987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m02
	I1209 10:51:07.269554  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m02
	I1209 10:51:07.269566  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.269577  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.269584  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.273156  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.470290  627293 request.go:632] Waited for 196.338376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:07.470357  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:07.470364  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.470374  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.470384  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.474385  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.474970  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:07.474989  627293 pod_ready.go:82] duration metric: took 401.206827ms for pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.475001  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dckpl" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.670046  627293 request.go:632] Waited for 194.938435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dckpl
	I1209 10:51:07.670123  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dckpl
	I1209 10:51:07.670153  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.670161  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.670177  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.673612  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.869971  627293 request.go:632] Waited for 195.374837ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:07.870066  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:07.870077  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.870089  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.870096  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.873498  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.873966  627293 pod_ready.go:93] pod "kube-proxy-dckpl" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:07.873986  627293 pod_ready.go:82] duration metric: took 398.974048ms for pod "kube-proxy-dckpl" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.873999  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wrvgb" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.070122  627293 request.go:632] Waited for 195.97145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgb
	I1209 10:51:08.070208  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgb
	I1209 10:51:08.070220  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.070232  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.070246  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.073337  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:08.270335  627293 request.go:632] Waited for 196.383902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:08.270428  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:08.270439  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.270446  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.270450  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.273875  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:08.274422  627293 pod_ready.go:93] pod "kube-proxy-wrvgb" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:08.274444  627293 pod_ready.go:82] duration metric: took 400.436343ms for pod "kube-proxy-wrvgb" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.274455  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.469480  627293 request.go:632] Waited for 194.92406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382
	I1209 10:51:08.469571  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382
	I1209 10:51:08.469579  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.469593  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.469604  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.473101  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:08.670247  627293 request.go:632] Waited for 196.404632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:08.670318  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:08.670323  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.670331  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.670334  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.673487  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:08.674226  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:08.674250  627293 pod_ready.go:82] duration metric: took 399.789273ms for pod "kube-scheduler-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.674263  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.870290  627293 request.go:632] Waited for 195.926045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m02
	I1209 10:51:08.870371  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m02
	I1209 10:51:08.870379  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.870387  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.870393  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.873809  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:09.069870  627293 request.go:632] Waited for 195.368943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:09.069944  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:09.069950  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.069962  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.069967  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.074483  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:09.075070  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:09.075095  627293 pod_ready.go:82] duration metric: took 400.825701ms for pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:09.075107  627293 pod_ready.go:39] duration metric: took 3.199339967s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 10:51:09.075137  627293 api_server.go:52] waiting for apiserver process to appear ...
	I1209 10:51:09.075203  627293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 10:51:09.089759  627293 api_server.go:72] duration metric: took 21.007136874s to wait for apiserver process to appear ...
	I1209 10:51:09.089785  627293 api_server.go:88] waiting for apiserver healthz status ...
	I1209 10:51:09.089806  627293 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I1209 10:51:09.093868  627293 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I1209 10:51:09.093935  627293 round_trippers.go:463] GET https://192.168.39.69:8443/version
	I1209 10:51:09.093940  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.093949  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.093957  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.094830  627293 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1209 10:51:09.094916  627293 api_server.go:141] control plane version: v1.31.2
	I1209 10:51:09.094932  627293 api_server.go:131] duration metric: took 5.141357ms to wait for apiserver health ...
	I1209 10:51:09.094940  627293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 10:51:09.269312  627293 request.go:632] Waited for 174.277582ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:09.269388  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:09.269394  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.269402  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.269407  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.274316  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:09.278484  627293 system_pods.go:59] 17 kube-system pods found
	I1209 10:51:09.278512  627293 system_pods.go:61] "coredns-7c65d6cfc9-8hlml" [d820cd6c-5064-4934-adc8-c68f84c09b46] Running
	I1209 10:51:09.278518  627293 system_pods.go:61] "coredns-7c65d6cfc9-rz6mw" [af297b6d-91f1-4114-b98c-cdfdfbd1589e] Running
	I1209 10:51:09.278523  627293 system_pods.go:61] "etcd-ha-792382" [eee2fbad-7f2f-4f5a-b701-abdbf8456f99] Running
	I1209 10:51:09.278527  627293 system_pods.go:61] "etcd-ha-792382-m02" [424768ff-af5d-4a31-b062-ab2c9576884d] Running
	I1209 10:51:09.278531  627293 system_pods.go:61] "kindnet-bqp2z" [b2c40579-4d72-4efe-b921-1e0f98b91544] Running
	I1209 10:51:09.278534  627293 system_pods.go:61] "kindnet-hkrhk" [9b35011c-ab45-4f55-a60f-08f9c4509c1d] Running
	I1209 10:51:09.278540  627293 system_pods.go:61] "kube-apiserver-ha-792382" [5157cfb0-bc91-4efe-b2e2-689778d5b012] Running
	I1209 10:51:09.278544  627293 system_pods.go:61] "kube-apiserver-ha-792382-m02" [71f9ce1c-aeff-4853-83ec-01a7fb6a81d5] Running
	I1209 10:51:09.278547  627293 system_pods.go:61] "kube-controller-manager-ha-792382" [f8cb7175-f6fd-4dcf-a6a5-66c44aa136ce] Running
	I1209 10:51:09.278550  627293 system_pods.go:61] "kube-controller-manager-ha-792382-m02" [2f7d8deb-ddad-47ac-ad18-5f8e7d0e095d] Running
	I1209 10:51:09.278553  627293 system_pods.go:61] "kube-proxy-dckpl" [13f7bda1-f9c2-4fd0-96e0-b6aee1139bc1] Running
	I1209 10:51:09.278556  627293 system_pods.go:61] "kube-proxy-wrvgb" [2531e29f-a4d5-41f9-8c38-3220b4caf96b] Running
	I1209 10:51:09.278560  627293 system_pods.go:61] "kube-scheduler-ha-792382" [b11693d0-b2e7-45a1-a1a2-6519a1535c45] Running
	I1209 10:51:09.278566  627293 system_pods.go:61] "kube-scheduler-ha-792382-m02" [250ce40c-5cbf-4f5d-a475-4f3d7ec100d9] Running
	I1209 10:51:09.278569  627293 system_pods.go:61] "kube-vip-ha-792382" [511f3a84-b444-4603-8f6f-eeeb262b1384] Running
	I1209 10:51:09.278574  627293 system_pods.go:61] "kube-vip-ha-792382-m02" [f2110c28-09f5-42f9-8e16-beec9759a0a2] Running
	I1209 10:51:09.278578  627293 system_pods.go:61] "storage-provisioner" [4419fe4f-e2ed-4ecb-a912-2dd074e29727] Running
	I1209 10:51:09.278587  627293 system_pods.go:74] duration metric: took 183.639674ms to wait for pod list to return data ...
	I1209 10:51:09.278598  627293 default_sa.go:34] waiting for default service account to be created ...
	I1209 10:51:09.470106  627293 request.go:632] Waited for 191.4045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I1209 10:51:09.470215  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I1209 10:51:09.470227  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.470242  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.470252  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.479626  627293 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1209 10:51:09.479907  627293 default_sa.go:45] found service account: "default"
	I1209 10:51:09.479929  627293 default_sa.go:55] duration metric: took 201.319758ms for default service account to be created ...
	I1209 10:51:09.479942  627293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 10:51:09.670105  627293 request.go:632] Waited for 190.065824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:09.670208  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:09.670215  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.670223  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.670228  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.674641  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:09.679080  627293 system_pods.go:86] 17 kube-system pods found
	I1209 10:51:09.679114  627293 system_pods.go:89] "coredns-7c65d6cfc9-8hlml" [d820cd6c-5064-4934-adc8-c68f84c09b46] Running
	I1209 10:51:09.679123  627293 system_pods.go:89] "coredns-7c65d6cfc9-rz6mw" [af297b6d-91f1-4114-b98c-cdfdfbd1589e] Running
	I1209 10:51:09.679131  627293 system_pods.go:89] "etcd-ha-792382" [eee2fbad-7f2f-4f5a-b701-abdbf8456f99] Running
	I1209 10:51:09.679138  627293 system_pods.go:89] "etcd-ha-792382-m02" [424768ff-af5d-4a31-b062-ab2c9576884d] Running
	I1209 10:51:09.679143  627293 system_pods.go:89] "kindnet-bqp2z" [b2c40579-4d72-4efe-b921-1e0f98b91544] Running
	I1209 10:51:09.679149  627293 system_pods.go:89] "kindnet-hkrhk" [9b35011c-ab45-4f55-a60f-08f9c4509c1d] Running
	I1209 10:51:09.679156  627293 system_pods.go:89] "kube-apiserver-ha-792382" [5157cfb0-bc91-4efe-b2e2-689778d5b012] Running
	I1209 10:51:09.679165  627293 system_pods.go:89] "kube-apiserver-ha-792382-m02" [71f9ce1c-aeff-4853-83ec-01a7fb6a81d5] Running
	I1209 10:51:09.679171  627293 system_pods.go:89] "kube-controller-manager-ha-792382" [f8cb7175-f6fd-4dcf-a6a5-66c44aa136ce] Running
	I1209 10:51:09.679180  627293 system_pods.go:89] "kube-controller-manager-ha-792382-m02" [2f7d8deb-ddad-47ac-ad18-5f8e7d0e095d] Running
	I1209 10:51:09.679184  627293 system_pods.go:89] "kube-proxy-dckpl" [13f7bda1-f9c2-4fd0-96e0-b6aee1139bc1] Running
	I1209 10:51:09.679188  627293 system_pods.go:89] "kube-proxy-wrvgb" [2531e29f-a4d5-41f9-8c38-3220b4caf96b] Running
	I1209 10:51:09.679195  627293 system_pods.go:89] "kube-scheduler-ha-792382" [b11693d0-b2e7-45a1-a1a2-6519a1535c45] Running
	I1209 10:51:09.679198  627293 system_pods.go:89] "kube-scheduler-ha-792382-m02" [250ce40c-5cbf-4f5d-a475-4f3d7ec100d9] Running
	I1209 10:51:09.679204  627293 system_pods.go:89] "kube-vip-ha-792382" [511f3a84-b444-4603-8f6f-eeeb262b1384] Running
	I1209 10:51:09.679208  627293 system_pods.go:89] "kube-vip-ha-792382-m02" [f2110c28-09f5-42f9-8e16-beec9759a0a2] Running
	I1209 10:51:09.679214  627293 system_pods.go:89] "storage-provisioner" [4419fe4f-e2ed-4ecb-a912-2dd074e29727] Running
	I1209 10:51:09.679221  627293 system_pods.go:126] duration metric: took 199.268781ms to wait for k8s-apps to be running ...
	I1209 10:51:09.679230  627293 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 10:51:09.679276  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:51:09.694076  627293 system_svc.go:56] duration metric: took 14.835467ms WaitForService to wait for kubelet
	I1209 10:51:09.694109  627293 kubeadm.go:582] duration metric: took 21.611489035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 10:51:09.694134  627293 node_conditions.go:102] verifying NodePressure condition ...
	I1209 10:51:09.869608  627293 request.go:632] Waited for 175.356595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes
	I1209 10:51:09.869706  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes
	I1209 10:51:09.869714  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.869723  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.869734  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.873420  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:09.874254  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:51:09.874278  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:51:09.874300  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:51:09.874304  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:51:09.874310  627293 node_conditions.go:105] duration metric: took 180.168766ms to run NodePressure ...
	I1209 10:51:09.874324  627293 start.go:241] waiting for startup goroutines ...
	I1209 10:51:09.874349  627293 start.go:255] writing updated cluster config ...
	I1209 10:51:09.876293  627293 out.go:201] 
	I1209 10:51:09.877844  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:51:09.877938  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:51:09.879618  627293 out.go:177] * Starting "ha-792382-m03" control-plane node in "ha-792382" cluster
	I1209 10:51:09.880651  627293 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:51:09.880677  627293 cache.go:56] Caching tarball of preloaded images
	I1209 10:51:09.880794  627293 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 10:51:09.880808  627293 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 10:51:09.880894  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:51:09.881065  627293 start.go:360] acquireMachinesLock for ha-792382-m03: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 10:51:09.881109  627293 start.go:364] duration metric: took 24.695µs to acquireMachinesLock for "ha-792382-m03"
	I1209 10:51:09.881155  627293 start.go:93] Provisioning new machine with config: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:51:09.881251  627293 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1209 10:51:09.882597  627293 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 10:51:09.882697  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:51:09.882736  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:51:09.898133  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41609
	I1209 10:51:09.898752  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:51:09.899364  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:51:09.899388  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:51:09.899714  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:51:09.899932  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetMachineName
	I1209 10:51:09.900153  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:09.900311  627293 start.go:159] libmachine.API.Create for "ha-792382" (driver="kvm2")
	I1209 10:51:09.900340  627293 client.go:168] LocalClient.Create starting
	I1209 10:51:09.900368  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 10:51:09.900399  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:51:09.900414  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:51:09.900469  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 10:51:09.900490  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:51:09.900500  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:51:09.900517  627293 main.go:141] libmachine: Running pre-create checks...
	I1209 10:51:09.900526  627293 main.go:141] libmachine: (ha-792382-m03) Calling .PreCreateCheck
	I1209 10:51:09.900676  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetConfigRaw
	I1209 10:51:09.901024  627293 main.go:141] libmachine: Creating machine...
	I1209 10:51:09.901037  627293 main.go:141] libmachine: (ha-792382-m03) Calling .Create
	I1209 10:51:09.901229  627293 main.go:141] libmachine: (ha-792382-m03) Creating KVM machine...
	I1209 10:51:09.902418  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found existing default KVM network
	I1209 10:51:09.902584  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found existing private KVM network mk-ha-792382
	I1209 10:51:09.902745  627293 main.go:141] libmachine: (ha-792382-m03) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03 ...
	I1209 10:51:09.902768  627293 main.go:141] libmachine: (ha-792382-m03) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 10:51:09.902867  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:09.902742  628056 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:51:09.902959  627293 main.go:141] libmachine: (ha-792382-m03) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 10:51:10.187575  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:10.187437  628056 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa...
	I1209 10:51:10.500975  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:10.500841  628056 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/ha-792382-m03.rawdisk...
	I1209 10:51:10.501016  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Writing magic tar header
	I1209 10:51:10.501026  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Writing SSH key tar header
	I1209 10:51:10.501034  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:10.500985  628056 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03 ...
	I1209 10:51:10.501188  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03
	I1209 10:51:10.501214  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03 (perms=drwx------)
	I1209 10:51:10.501235  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 10:51:10.501255  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:51:10.501270  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 10:51:10.501289  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 10:51:10.501315  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 10:51:10.501328  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins
	I1209 10:51:10.501340  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home
	I1209 10:51:10.501354  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Skipping /home - not owner
	I1209 10:51:10.501371  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 10:51:10.501393  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 10:51:10.501413  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 10:51:10.501426  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 10:51:10.501440  627293 main.go:141] libmachine: (ha-792382-m03) Creating domain...
	I1209 10:51:10.502439  627293 main.go:141] libmachine: (ha-792382-m03) define libvirt domain using xml: 
	I1209 10:51:10.502466  627293 main.go:141] libmachine: (ha-792382-m03) <domain type='kvm'>
	I1209 10:51:10.502476  627293 main.go:141] libmachine: (ha-792382-m03)   <name>ha-792382-m03</name>
	I1209 10:51:10.502484  627293 main.go:141] libmachine: (ha-792382-m03)   <memory unit='MiB'>2200</memory>
	I1209 10:51:10.502490  627293 main.go:141] libmachine: (ha-792382-m03)   <vcpu>2</vcpu>
	I1209 10:51:10.502495  627293 main.go:141] libmachine: (ha-792382-m03)   <features>
	I1209 10:51:10.502506  627293 main.go:141] libmachine: (ha-792382-m03)     <acpi/>
	I1209 10:51:10.502516  627293 main.go:141] libmachine: (ha-792382-m03)     <apic/>
	I1209 10:51:10.502524  627293 main.go:141] libmachine: (ha-792382-m03)     <pae/>
	I1209 10:51:10.502534  627293 main.go:141] libmachine: (ha-792382-m03)     
	I1209 10:51:10.502544  627293 main.go:141] libmachine: (ha-792382-m03)   </features>
	I1209 10:51:10.502552  627293 main.go:141] libmachine: (ha-792382-m03)   <cpu mode='host-passthrough'>
	I1209 10:51:10.502587  627293 main.go:141] libmachine: (ha-792382-m03)   
	I1209 10:51:10.502612  627293 main.go:141] libmachine: (ha-792382-m03)   </cpu>
	I1209 10:51:10.502650  627293 main.go:141] libmachine: (ha-792382-m03)   <os>
	I1209 10:51:10.502668  627293 main.go:141] libmachine: (ha-792382-m03)     <type>hvm</type>
	I1209 10:51:10.502674  627293 main.go:141] libmachine: (ha-792382-m03)     <boot dev='cdrom'/>
	I1209 10:51:10.502679  627293 main.go:141] libmachine: (ha-792382-m03)     <boot dev='hd'/>
	I1209 10:51:10.502688  627293 main.go:141] libmachine: (ha-792382-m03)     <bootmenu enable='no'/>
	I1209 10:51:10.502693  627293 main.go:141] libmachine: (ha-792382-m03)   </os>
	I1209 10:51:10.502731  627293 main.go:141] libmachine: (ha-792382-m03)   <devices>
	I1209 10:51:10.502756  627293 main.go:141] libmachine: (ha-792382-m03)     <disk type='file' device='cdrom'>
	I1209 10:51:10.502773  627293 main.go:141] libmachine: (ha-792382-m03)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/boot2docker.iso'/>
	I1209 10:51:10.502784  627293 main.go:141] libmachine: (ha-792382-m03)       <target dev='hdc' bus='scsi'/>
	I1209 10:51:10.502796  627293 main.go:141] libmachine: (ha-792382-m03)       <readonly/>
	I1209 10:51:10.502806  627293 main.go:141] libmachine: (ha-792382-m03)     </disk>
	I1209 10:51:10.502815  627293 main.go:141] libmachine: (ha-792382-m03)     <disk type='file' device='disk'>
	I1209 10:51:10.502827  627293 main.go:141] libmachine: (ha-792382-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 10:51:10.502844  627293 main.go:141] libmachine: (ha-792382-m03)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/ha-792382-m03.rawdisk'/>
	I1209 10:51:10.502854  627293 main.go:141] libmachine: (ha-792382-m03)       <target dev='hda' bus='virtio'/>
	I1209 10:51:10.502862  627293 main.go:141] libmachine: (ha-792382-m03)     </disk>
	I1209 10:51:10.502873  627293 main.go:141] libmachine: (ha-792382-m03)     <interface type='network'>
	I1209 10:51:10.502886  627293 main.go:141] libmachine: (ha-792382-m03)       <source network='mk-ha-792382'/>
	I1209 10:51:10.502901  627293 main.go:141] libmachine: (ha-792382-m03)       <model type='virtio'/>
	I1209 10:51:10.502917  627293 main.go:141] libmachine: (ha-792382-m03)     </interface>
	I1209 10:51:10.502927  627293 main.go:141] libmachine: (ha-792382-m03)     <interface type='network'>
	I1209 10:51:10.502935  627293 main.go:141] libmachine: (ha-792382-m03)       <source network='default'/>
	I1209 10:51:10.502945  627293 main.go:141] libmachine: (ha-792382-m03)       <model type='virtio'/>
	I1209 10:51:10.502954  627293 main.go:141] libmachine: (ha-792382-m03)     </interface>
	I1209 10:51:10.502965  627293 main.go:141] libmachine: (ha-792382-m03)     <serial type='pty'>
	I1209 10:51:10.502981  627293 main.go:141] libmachine: (ha-792382-m03)       <target port='0'/>
	I1209 10:51:10.503011  627293 main.go:141] libmachine: (ha-792382-m03)     </serial>
	I1209 10:51:10.503041  627293 main.go:141] libmachine: (ha-792382-m03)     <console type='pty'>
	I1209 10:51:10.503058  627293 main.go:141] libmachine: (ha-792382-m03)       <target type='serial' port='0'/>
	I1209 10:51:10.503071  627293 main.go:141] libmachine: (ha-792382-m03)     </console>
	I1209 10:51:10.503082  627293 main.go:141] libmachine: (ha-792382-m03)     <rng model='virtio'>
	I1209 10:51:10.503096  627293 main.go:141] libmachine: (ha-792382-m03)       <backend model='random'>/dev/random</backend>
	I1209 10:51:10.503113  627293 main.go:141] libmachine: (ha-792382-m03)     </rng>
	I1209 10:51:10.503127  627293 main.go:141] libmachine: (ha-792382-m03)     
	I1209 10:51:10.503136  627293 main.go:141] libmachine: (ha-792382-m03)     
	I1209 10:51:10.503142  627293 main.go:141] libmachine: (ha-792382-m03)   </devices>
	I1209 10:51:10.503150  627293 main.go:141] libmachine: (ha-792382-m03) </domain>
	I1209 10:51:10.503164  627293 main.go:141] libmachine: (ha-792382-m03) 
	I1209 10:51:10.509799  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:26:51:82 in network default
	I1209 10:51:10.510544  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:10.510571  627293 main.go:141] libmachine: (ha-792382-m03) Ensuring networks are active...
	I1209 10:51:10.511459  627293 main.go:141] libmachine: (ha-792382-m03) Ensuring network default is active
	I1209 10:51:10.511785  627293 main.go:141] libmachine: (ha-792382-m03) Ensuring network mk-ha-792382 is active
	I1209 10:51:10.512166  627293 main.go:141] libmachine: (ha-792382-m03) Getting domain xml...
	I1209 10:51:10.512954  627293 main.go:141] libmachine: (ha-792382-m03) Creating domain...
	I1209 10:51:11.772243  627293 main.go:141] libmachine: (ha-792382-m03) Waiting to get IP...
	I1209 10:51:11.773341  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:11.773804  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:11.773837  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:11.773768  628056 retry.go:31] will retry after 261.519944ms: waiting for machine to come up
	I1209 10:51:12.038077  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:12.038774  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:12.038804  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:12.038709  628056 retry.go:31] will retry after 310.562513ms: waiting for machine to come up
	I1209 10:51:12.350405  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:12.350812  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:12.350870  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:12.350779  628056 retry.go:31] will retry after 381.875413ms: waiting for machine to come up
	I1209 10:51:12.734428  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:12.734917  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:12.734939  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:12.734868  628056 retry.go:31] will retry after 376.611685ms: waiting for machine to come up
	I1209 10:51:13.113430  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:13.113850  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:13.113878  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:13.113807  628056 retry.go:31] will retry after 480.736793ms: waiting for machine to come up
	I1209 10:51:13.596329  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:13.596796  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:13.596819  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:13.596753  628056 retry.go:31] will retry after 875.034768ms: waiting for machine to come up
	I1209 10:51:14.473751  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:14.474126  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:14.474155  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:14.474088  628056 retry.go:31] will retry after 816.368717ms: waiting for machine to come up
	I1209 10:51:15.292960  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:15.293587  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:15.293618  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:15.293489  628056 retry.go:31] will retry after 1.183655157s: waiting for machine to come up
	I1209 10:51:16.478955  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:16.479455  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:16.479486  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:16.479390  628056 retry.go:31] will retry after 1.459421983s: waiting for machine to come up
	I1209 10:51:17.940565  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:17.940909  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:17.940939  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:17.940853  628056 retry.go:31] will retry after 2.01883018s: waiting for machine to come up
	I1209 10:51:19.961861  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:19.962417  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:19.962457  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:19.962353  628056 retry.go:31] will retry after 1.857861431s: waiting for machine to come up
	I1209 10:51:21.822060  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:21.822610  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:21.822640  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:21.822556  628056 retry.go:31] will retry after 2.674364218s: waiting for machine to come up
	I1209 10:51:24.499290  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:24.499696  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:24.499718  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:24.499647  628056 retry.go:31] will retry after 3.815833745s: waiting for machine to come up
	I1209 10:51:28.319279  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:28.319654  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:28.319685  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:28.319601  628056 retry.go:31] will retry after 5.165694329s: waiting for machine to come up
	I1209 10:51:33.487484  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.487908  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has current primary IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.487939  627293 main.go:141] libmachine: (ha-792382-m03) Found IP for machine: 192.168.39.82
	I1209 10:51:33.487954  627293 main.go:141] libmachine: (ha-792382-m03) Reserving static IP address...
	I1209 10:51:33.488381  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find host DHCP lease matching {name: "ha-792382-m03", mac: "52:54:00:10:ae:3c", ip: "192.168.39.82"} in network mk-ha-792382
	I1209 10:51:33.564150  627293 main.go:141] libmachine: (ha-792382-m03) Reserved static IP address: 192.168.39.82
	I1209 10:51:33.564197  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Getting to WaitForSSH function...
	I1209 10:51:33.564206  627293 main.go:141] libmachine: (ha-792382-m03) Waiting for SSH to be available...
	I1209 10:51:33.567024  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.567471  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:minikube Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:33.567501  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.567664  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Using SSH client type: external
	I1209 10:51:33.567687  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa (-rw-------)
	I1209 10:51:33.567722  627293 main.go:141] libmachine: (ha-792382-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 10:51:33.567734  627293 main.go:141] libmachine: (ha-792382-m03) DBG | About to run SSH command:
	I1209 10:51:33.567748  627293 main.go:141] libmachine: (ha-792382-m03) DBG | exit 0
	I1209 10:51:33.698092  627293 main.go:141] libmachine: (ha-792382-m03) DBG | SSH cmd err, output: <nil>: 
	I1209 10:51:33.698421  627293 main.go:141] libmachine: (ha-792382-m03) KVM machine creation complete!
	I1209 10:51:33.698819  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetConfigRaw
	I1209 10:51:33.699478  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:33.699674  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:33.699826  627293 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 10:51:33.699837  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetState
	I1209 10:51:33.701167  627293 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 10:51:33.701183  627293 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 10:51:33.701191  627293 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 10:51:33.701198  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:33.703744  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.704133  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:33.704162  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.704266  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:33.704462  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.704600  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.704723  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:33.704916  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:33.705157  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:33.705168  627293 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 10:51:33.813390  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:51:33.813423  627293 main.go:141] libmachine: Detecting the provisioner...
	I1209 10:51:33.813436  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:33.816441  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.816804  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:33.816841  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.816951  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:33.817167  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.817376  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.817559  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:33.817716  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:33.817907  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:33.817921  627293 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 10:51:33.926605  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 10:51:33.926676  627293 main.go:141] libmachine: found compatible host: buildroot
	I1209 10:51:33.926683  627293 main.go:141] libmachine: Provisioning with buildroot...
	I1209 10:51:33.926691  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetMachineName
	I1209 10:51:33.926942  627293 buildroot.go:166] provisioning hostname "ha-792382-m03"
	I1209 10:51:33.926972  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetMachineName
	I1209 10:51:33.927120  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:33.929899  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.930353  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:33.930382  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.930545  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:33.930780  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.930935  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.931076  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:33.931236  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:33.931442  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:33.931455  627293 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792382-m03 && echo "ha-792382-m03" | sudo tee /etc/hostname
	I1209 10:51:34.053804  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792382-m03
	
	I1209 10:51:34.053838  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.056450  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.056795  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.056821  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.057070  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.057253  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.057460  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.057580  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.057749  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:34.057912  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:34.057932  627293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792382-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792382-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792382-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 10:51:34.174396  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:51:34.174436  627293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 10:51:34.174459  627293 buildroot.go:174] setting up certificates
	I1209 10:51:34.174471  627293 provision.go:84] configureAuth start
	I1209 10:51:34.174484  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetMachineName
	I1209 10:51:34.174826  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetIP
	I1209 10:51:34.178006  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.178384  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.178414  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.178593  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.180882  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.181259  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.181297  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.181434  627293 provision.go:143] copyHostCerts
	I1209 10:51:34.181467  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:51:34.181509  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 10:51:34.181521  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:51:34.181599  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 10:51:34.181708  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:51:34.181739  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 10:51:34.181750  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:51:34.181796  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 10:51:34.181862  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:51:34.181879  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 10:51:34.181885  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:51:34.181910  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 10:51:34.181961  627293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.ha-792382-m03 san=[127.0.0.1 192.168.39.82 ha-792382-m03 localhost minikube]
	I1209 10:51:34.410867  627293 provision.go:177] copyRemoteCerts
	I1209 10:51:34.410930  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 10:51:34.410961  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.414202  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.414663  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.414696  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.414964  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.415202  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.415374  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.415561  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa Username:docker}
	I1209 10:51:34.500121  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 10:51:34.500216  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 10:51:34.525465  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 10:51:34.525566  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 10:51:34.548733  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 10:51:34.548819  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 10:51:34.570848  627293 provision.go:87] duration metric: took 396.361471ms to configureAuth
	I1209 10:51:34.570884  627293 buildroot.go:189] setting minikube options for container-runtime
	I1209 10:51:34.571164  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:51:34.571276  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.574107  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.574532  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.574557  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.574761  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.574957  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.575114  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.575329  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.575548  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:34.575797  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:34.575824  627293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 10:51:34.816625  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 10:51:34.816655  627293 main.go:141] libmachine: Checking connection to Docker...
	I1209 10:51:34.816670  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetURL
	I1209 10:51:34.817924  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Using libvirt version 6000000
	I1209 10:51:34.820293  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.820739  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.820782  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.820943  627293 main.go:141] libmachine: Docker is up and running!
	I1209 10:51:34.820954  627293 main.go:141] libmachine: Reticulating splines...
	I1209 10:51:34.820962  627293 client.go:171] duration metric: took 24.920612799s to LocalClient.Create
	I1209 10:51:34.820990  627293 start.go:167] duration metric: took 24.920677638s to libmachine.API.Create "ha-792382"
	I1209 10:51:34.821001  627293 start.go:293] postStartSetup for "ha-792382-m03" (driver="kvm2")
	I1209 10:51:34.821015  627293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 10:51:34.821041  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:34.821314  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 10:51:34.821340  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.823716  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.824123  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.824150  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.824346  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.824540  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.824710  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.824899  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa Username:docker}
	I1209 10:51:34.908596  627293 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 10:51:34.912587  627293 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 10:51:34.912634  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 10:51:34.912758  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 10:51:34.912881  627293 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 10:51:34.912894  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /etc/ssl/certs/6170172.pem
	I1209 10:51:34.913014  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 10:51:34.921828  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:51:34.944676  627293 start.go:296] duration metric: took 123.657477ms for postStartSetup
	I1209 10:51:34.944735  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetConfigRaw
	I1209 10:51:34.945372  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetIP
	I1209 10:51:34.948020  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.948350  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.948374  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.948706  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:51:34.948901  627293 start.go:128] duration metric: took 25.067639086s to createHost
	I1209 10:51:34.948928  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.951092  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.951471  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.951504  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.951672  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.951858  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.952015  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.952130  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.952269  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:34.952475  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:34.952491  627293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 10:51:35.062736  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733741495.040495881
	
	I1209 10:51:35.062764  627293 fix.go:216] guest clock: 1733741495.040495881
	I1209 10:51:35.062773  627293 fix.go:229] Guest: 2024-12-09 10:51:35.040495881 +0000 UTC Remote: 2024-12-09 10:51:34.948914535 +0000 UTC m=+142.833153468 (delta=91.581346ms)
	I1209 10:51:35.062795  627293 fix.go:200] guest clock delta is within tolerance: 91.581346ms
	I1209 10:51:35.062802  627293 start.go:83] releasing machines lock for "ha-792382-m03", held for 25.181683585s
	I1209 10:51:35.062825  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:35.063125  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetIP
	I1209 10:51:35.065564  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.065919  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:35.065950  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.068041  627293 out.go:177] * Found network options:
	I1209 10:51:35.069311  627293 out.go:177]   - NO_PROXY=192.168.39.69,192.168.39.89
	W1209 10:51:35.070337  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	W1209 10:51:35.070367  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 10:51:35.070382  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:35.070888  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:35.071098  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:35.071216  627293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 10:51:35.071260  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	W1209 10:51:35.071333  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	W1209 10:51:35.071358  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 10:51:35.071448  627293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 10:51:35.071472  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:35.074136  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.074287  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.074566  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:35.074588  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.074614  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:35.074633  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.074729  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:35.074920  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:35.074923  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:35.075091  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:35.075094  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:35.075270  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:35.075298  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa Username:docker}
	I1209 10:51:35.075413  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa Username:docker}
	I1209 10:51:35.318511  627293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 10:51:35.324511  627293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 10:51:35.324586  627293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 10:51:35.341575  627293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 10:51:35.341607  627293 start.go:495] detecting cgroup driver to use...
	I1209 10:51:35.341686  627293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 10:51:35.357724  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 10:51:35.372685  627293 docker.go:217] disabling cri-docker service (if available) ...
	I1209 10:51:35.372771  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 10:51:35.387627  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 10:51:35.401716  627293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 10:51:35.525416  627293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 10:51:35.688544  627293 docker.go:233] disabling docker service ...
	I1209 10:51:35.688627  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 10:51:35.703495  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 10:51:35.717769  627293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 10:51:35.838656  627293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 10:51:35.968740  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 10:51:35.982914  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 10:51:36.001011  627293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 10:51:36.001092  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.011496  627293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 10:51:36.011565  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.021527  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.031202  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.041196  627293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 10:51:36.051656  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.062085  627293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.078955  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.088919  627293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 10:51:36.098428  627293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 10:51:36.098491  627293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 10:51:36.112478  627293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 10:51:36.121985  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:51:36.236147  627293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 10:51:36.331891  627293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 10:51:36.331989  627293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 10:51:36.336578  627293 start.go:563] Will wait 60s for crictl version
	I1209 10:51:36.336641  627293 ssh_runner.go:195] Run: which crictl
	I1209 10:51:36.340301  627293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 10:51:36.380474  627293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 10:51:36.380557  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:51:36.408527  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:51:36.438078  627293 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 10:51:36.439329  627293 out.go:177]   - env NO_PROXY=192.168.39.69
	I1209 10:51:36.440501  627293 out.go:177]   - env NO_PROXY=192.168.39.69,192.168.39.89
	I1209 10:51:36.441659  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetIP
	I1209 10:51:36.444828  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:36.445310  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:36.445339  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:36.445521  627293 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 10:51:36.449517  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:51:36.461352  627293 mustload.go:65] Loading cluster: ha-792382
	I1209 10:51:36.461581  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:51:36.461851  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:51:36.461915  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:51:36.476757  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I1209 10:51:36.477266  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:51:36.477839  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:51:36.477861  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:51:36.478264  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:51:36.478470  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:51:36.480228  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:51:36.480540  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:51:36.480578  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:51:36.495892  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I1209 10:51:36.496439  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:51:36.496999  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:51:36.497024  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:51:36.497365  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:51:36.497597  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:51:36.497777  627293 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382 for IP: 192.168.39.82
	I1209 10:51:36.497796  627293 certs.go:194] generating shared ca certs ...
	I1209 10:51:36.497816  627293 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:51:36.497951  627293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 10:51:36.497987  627293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 10:51:36.497996  627293 certs.go:256] generating profile certs ...
	I1209 10:51:36.498067  627293 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key
	I1209 10:51:36.498091  627293 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.74be6275
	I1209 10:51:36.498107  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.74be6275 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.89 192.168.39.82 192.168.39.254]
	I1209 10:51:36.575706  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.74be6275 ...
	I1209 10:51:36.575744  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.74be6275: {Name:mkc0279d5f95c7c05a4a03239304c698f543bc23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:51:36.575927  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.74be6275 ...
	I1209 10:51:36.575940  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.74be6275: {Name:mk628bdb195c5612308f11734296bd7934f36956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:51:36.576016  627293 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.74be6275 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt
	I1209 10:51:36.576148  627293 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.74be6275 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key
	I1209 10:51:36.576277  627293 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key
	I1209 10:51:36.576293  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 10:51:36.576307  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 10:51:36.576321  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 10:51:36.576334  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 10:51:36.576347  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 10:51:36.576359  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 10:51:36.576371  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 10:51:36.590260  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 10:51:36.590358  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 10:51:36.590394  627293 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 10:51:36.590412  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 10:51:36.590439  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 10:51:36.590462  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 10:51:36.590483  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 10:51:36.590521  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:51:36.590548  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /usr/share/ca-certificates/6170172.pem
	I1209 10:51:36.590563  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:51:36.590576  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem -> /usr/share/ca-certificates/617017.pem
	I1209 10:51:36.590614  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:51:36.594031  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:51:36.594418  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:51:36.594452  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:51:36.594660  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:51:36.594910  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:51:36.595086  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:51:36.595232  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:51:36.666577  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 10:51:36.671392  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 10:51:36.681688  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 10:51:36.685694  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 10:51:36.696364  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 10:51:36.700718  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 10:51:36.712302  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 10:51:36.716534  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 10:51:36.728128  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 10:51:36.732026  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 10:51:36.743956  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 10:51:36.748200  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1209 10:51:36.761818  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 10:51:36.786260  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 10:51:36.809394  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 10:51:36.832350  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 10:51:36.854875  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1209 10:51:36.876691  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 10:51:36.900011  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 10:51:36.922859  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 10:51:36.945086  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 10:51:36.966983  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 10:51:36.989660  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 10:51:37.011442  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 10:51:37.027256  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 10:51:37.042921  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 10:51:37.059579  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 10:51:37.078911  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 10:51:37.094738  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1209 10:51:37.112113  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 10:51:37.130720  627293 ssh_runner.go:195] Run: openssl version
	I1209 10:51:37.136460  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 10:51:37.148061  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:51:37.152555  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:51:37.152627  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:51:37.158639  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 10:51:37.170061  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 10:51:37.180567  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 10:51:37.184633  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 10:51:37.184695  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 10:51:37.190044  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 10:51:37.200767  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 10:51:37.211239  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 10:51:37.215531  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 10:51:37.215617  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 10:51:37.221282  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 10:51:37.232891  627293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 10:51:37.237033  627293 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 10:51:37.237096  627293 kubeadm.go:934] updating node {m03 192.168.39.82 8443 v1.31.2 crio true true} ...
	I1209 10:51:37.237210  627293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792382-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 10:51:37.237247  627293 kube-vip.go:115] generating kube-vip config ...
	I1209 10:51:37.237291  627293 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 10:51:37.254154  627293 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 10:51:37.254286  627293 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1209 10:51:37.254376  627293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 10:51:37.266499  627293 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1209 10:51:37.266573  627293 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1209 10:51:37.276989  627293 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1209 10:51:37.277004  627293 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1209 10:51:37.277031  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 10:51:37.277052  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:51:37.277099  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 10:51:37.276989  627293 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1209 10:51:37.277162  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 10:51:37.277221  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 10:51:37.294260  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 10:51:37.294329  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1209 10:51:37.294354  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1209 10:51:37.294397  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 10:51:37.294410  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1209 10:51:37.294447  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1209 10:51:37.309738  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1209 10:51:37.309777  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1209 10:51:38.106081  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 10:51:38.115636  627293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 10:51:38.132759  627293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 10:51:38.149726  627293 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 10:51:38.166083  627293 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 10:51:38.169937  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:51:38.181150  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:51:38.308494  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:51:38.325679  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:51:38.326045  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:51:38.326105  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:51:38.344459  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45865
	I1209 10:51:38.345084  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:51:38.345753  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:51:38.345796  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:51:38.346197  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:51:38.346437  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:51:38.346586  627293 start.go:317] joinCluster: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:51:38.346740  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 10:51:38.346768  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:51:38.349642  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:51:38.350099  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:51:38.350125  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:51:38.350286  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:51:38.350484  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:51:38.350634  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:51:38.350780  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:51:38.514216  627293 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:51:38.514274  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token exrmr9.huiz7swpoaojy929 --discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-792382-m03 --control-plane --apiserver-advertise-address=192.168.39.82 --apiserver-bind-port=8443"
	I1209 10:52:01.803198  627293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token exrmr9.huiz7swpoaojy929 --discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-792382-m03 --control-plane --apiserver-advertise-address=192.168.39.82 --apiserver-bind-port=8443": (23.288893034s)
	I1209 10:52:01.803245  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1209 10:52:02.338453  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792382-m03 minikube.k8s.io/updated_at=2024_12_09T10_52_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=ha-792382 minikube.k8s.io/primary=false
	I1209 10:52:02.475613  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-792382-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 10:52:02.591820  627293 start.go:319] duration metric: took 24.245228011s to joinCluster
	I1209 10:52:02.591921  627293 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:52:02.592324  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:52:02.593526  627293 out.go:177] * Verifying Kubernetes components...
	I1209 10:52:02.594809  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:52:02.839263  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:52:02.861519  627293 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:52:02.861874  627293 kapi.go:59] client config for ha-792382: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt", KeyFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key", CAFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 10:52:02.861974  627293 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.69:8443
	I1209 10:52:02.862413  627293 node_ready.go:35] waiting up to 6m0s for node "ha-792382-m03" to be "Ready" ...
	I1209 10:52:02.862536  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:02.862551  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:02.862563  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:02.862569  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:02.866706  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:03.363562  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:03.363585  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:03.363593  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:03.363597  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:03.367171  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:03.863250  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:03.863275  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:03.863284  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:03.863288  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:03.866476  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:04.363562  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:04.363593  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:04.363607  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:04.363611  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:04.367286  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:04.862912  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:04.862943  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:04.862957  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:04.862964  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:04.866217  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:04.866889  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:05.363334  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:05.363359  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:05.363368  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:05.363371  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:05.366850  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:05.863531  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:05.863565  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:05.863577  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:05.863584  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:05.867191  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:06.363075  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:06.363103  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:06.363116  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:06.363123  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:06.368722  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:06.862720  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:06.862750  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:06.862764  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:06.862773  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:06.865876  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:07.363131  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:07.363158  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:07.363167  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:07.363181  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:07.366603  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:07.367388  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:07.862715  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:07.862743  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:07.862756  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:07.862762  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:07.866073  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:08.362710  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:08.362744  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:08.362756  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:08.362763  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:08.366953  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:08.862771  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:08.862799  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:08.862808  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:08.862813  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:08.866875  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:09.362787  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:09.362812  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:09.362820  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:09.362824  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:09.367053  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:09.367603  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:09.862752  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:09.862786  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:09.862803  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:09.862809  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:09.866207  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:10.363296  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:10.363329  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:10.363341  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:10.363347  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:10.368594  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:10.863471  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:10.863504  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:10.863518  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:10.863523  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:10.868956  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:11.362961  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:11.362988  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:11.362998  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:11.363003  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:11.366828  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:11.862866  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:11.862896  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:11.862906  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:11.862912  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:11.868040  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:11.868910  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:12.363520  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:12.363543  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:12.363551  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:12.363555  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:12.367064  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:12.862709  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:12.862738  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:12.862747  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:12.862751  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:12.866024  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:13.362946  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:13.362972  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:13.362981  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:13.362985  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:13.367208  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:13.863257  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:13.863282  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:13.863291  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:13.863295  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:13.866570  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:14.363551  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:14.363576  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:14.363588  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:14.363595  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:14.367509  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:14.368341  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:14.863449  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:14.863475  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:14.863485  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:14.863492  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:14.866808  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:15.363473  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:15.363501  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:15.363510  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:15.363514  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:15.367252  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:15.863063  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:15.863086  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:15.863095  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:15.863099  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:15.866694  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:16.363487  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:16.363515  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:16.363525  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:16.363529  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:16.366968  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:16.863237  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:16.863267  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:16.863277  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:16.863285  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:16.866528  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:16.867067  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:17.363592  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:17.363616  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:17.363628  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:17.363634  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:17.367261  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:17.863310  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:17.863334  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:17.863343  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:17.863347  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:17.866881  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:18.363575  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:18.363603  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:18.363614  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:18.363624  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:18.368502  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:18.863660  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:18.863684  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:18.863693  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:18.863698  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:18.866946  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:18.867391  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:19.362762  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:19.362786  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:19.362794  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:19.362798  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:19.366684  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:19.863495  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:19.863581  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:19.863600  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:19.863608  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:19.870858  627293 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1209 10:52:20.363448  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:20.363473  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.363482  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.363487  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.367472  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.368003  627293 node_ready.go:49] node "ha-792382-m03" has status "Ready":"True"
	I1209 10:52:20.368025  627293 node_ready.go:38] duration metric: took 17.505584111s for node "ha-792382-m03" to be "Ready" ...
	I1209 10:52:20.368035  627293 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 10:52:20.368124  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:20.368135  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.368143  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.368147  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.375067  627293 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 10:52:20.382809  627293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.382913  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8hlml
	I1209 10:52:20.382922  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.382932  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.382939  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.386681  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.387473  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:20.387492  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.387502  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.387506  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.390201  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:20.390989  627293 pod_ready.go:93] pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.391012  627293 pod_ready.go:82] duration metric: took 8.170284ms for pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.391025  627293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.391107  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rz6mw
	I1209 10:52:20.391121  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.391132  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.391139  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.393896  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:20.394886  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:20.394902  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.394910  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.394913  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.397630  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:20.398092  627293 pod_ready.go:93] pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.398114  627293 pod_ready.go:82] duration metric: took 7.080989ms for pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.398128  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.398227  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382
	I1209 10:52:20.398238  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.398249  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.398255  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.402755  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:20.403454  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:20.403477  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.403487  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.403495  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.407171  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.407675  627293 pod_ready.go:93] pod "etcd-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.407690  627293 pod_ready.go:82] duration metric: took 9.55619ms for pod "etcd-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.407701  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.407761  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382-m02
	I1209 10:52:20.407769  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.407776  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.407782  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.411699  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.412198  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:20.412214  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.412221  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.412228  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.415128  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:20.415876  627293 pod_ready.go:93] pod "etcd-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.415895  627293 pod_ready.go:82] duration metric: took 8.185439ms for pod "etcd-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.415927  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.564348  627293 request.go:632] Waited for 148.293235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382-m03
	I1209 10:52:20.564443  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382-m03
	I1209 10:52:20.564455  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.564475  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.564485  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.567758  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.763843  627293 request.go:632] Waited for 195.366287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:20.763920  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:20.763933  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.763945  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.763957  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.772124  627293 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1209 10:52:20.772769  627293 pod_ready.go:93] pod "etcd-ha-792382-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.772802  627293 pod_ready.go:82] duration metric: took 356.849767ms for pod "etcd-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.772827  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.963692  627293 request.go:632] Waited for 190.744323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382
	I1209 10:52:20.963762  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382
	I1209 10:52:20.963767  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.963775  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.963781  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.966983  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.163987  627293 request.go:632] Waited for 196.382643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:21.164057  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:21.164062  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.164070  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.164074  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.167406  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.168047  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:21.168074  627293 pod_ready.go:82] duration metric: took 395.237987ms for pod "kube-apiserver-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.168086  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.364059  627293 request.go:632] Waited for 195.853676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m02
	I1209 10:52:21.364141  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m02
	I1209 10:52:21.364147  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.364155  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.364164  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.368500  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:21.563923  627293 request.go:632] Waited for 194.790397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:21.563997  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:21.564006  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.564018  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.564029  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.567739  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.568495  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:21.568518  627293 pod_ready.go:82] duration metric: took 400.423423ms for pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.568529  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.763480  627293 request.go:632] Waited for 194.86491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m03
	I1209 10:52:21.763574  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m03
	I1209 10:52:21.763581  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.763594  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.763602  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.767033  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.964208  627293 request.go:632] Waited for 196.356498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:21.964296  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:21.964305  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.964340  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.964351  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.967752  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.968228  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:21.968247  627293 pod_ready.go:82] duration metric: took 399.712092ms for pod "kube-apiserver-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.968258  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.163746  627293 request.go:632] Waited for 195.415661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382
	I1209 10:52:22.163805  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382
	I1209 10:52:22.163810  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.163823  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.163830  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.166645  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:22.364336  627293 request.go:632] Waited for 197.03194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:22.364428  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:22.364449  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.364480  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.364491  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.368286  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:22.369016  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:22.369039  627293 pod_ready.go:82] duration metric: took 400.774826ms for pod "kube-controller-manager-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.369050  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.564041  627293 request.go:632] Waited for 194.907266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m02
	I1209 10:52:22.564119  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m02
	I1209 10:52:22.564127  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.564140  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.564149  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.567707  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:22.763845  627293 request.go:632] Waited for 195.40032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:22.763928  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:22.763935  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.763956  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.763982  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.767705  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:22.768312  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:22.768335  627293 pod_ready.go:82] duration metric: took 399.277854ms for pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.768350  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.964360  627293 request.go:632] Waited for 195.903206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m03
	I1209 10:52:22.964433  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m03
	I1209 10:52:22.964446  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.964457  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.964465  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.967540  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:23.163523  627293 request.go:632] Waited for 195.162382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:23.163590  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:23.163596  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.163611  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.163618  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.166875  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:23.167557  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:23.167581  627293 pod_ready.go:82] duration metric: took 399.219283ms for pod "kube-controller-manager-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.167592  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2l42s" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.364163  627293 request.go:632] Waited for 196.469736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2l42s
	I1209 10:52:23.364233  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2l42s
	I1209 10:52:23.364240  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.364250  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.364256  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.368871  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:23.564369  627293 request.go:632] Waited for 194.396631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:23.564485  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:23.564496  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.564504  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.564509  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.567861  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:23.568367  627293 pod_ready.go:93] pod "kube-proxy-2l42s" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:23.568387  627293 pod_ready.go:82] duration metric: took 400.786442ms for pod "kube-proxy-2l42s" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.568400  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dckpl" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.763515  627293 request.go:632] Waited for 195.023087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dckpl
	I1209 10:52:23.763600  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dckpl
	I1209 10:52:23.763608  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.763619  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.763628  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.767899  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:23.964038  627293 request.go:632] Waited for 195.369645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:23.964137  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:23.964144  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.964152  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.964161  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.967628  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:23.968543  627293 pod_ready.go:93] pod "kube-proxy-dckpl" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:23.968572  627293 pod_ready.go:82] duration metric: took 400.162458ms for pod "kube-proxy-dckpl" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.968586  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wrvgb" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.164418  627293 request.go:632] Waited for 195.731455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgb
	I1209 10:52:24.164497  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgb
	I1209 10:52:24.164502  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.164511  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.164516  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.167227  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:24.364211  627293 request.go:632] Waited for 196.319396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:24.364296  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:24.364308  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.364319  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.364330  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.368387  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:24.369158  627293 pod_ready.go:93] pod "kube-proxy-wrvgb" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:24.369182  627293 pod_ready.go:82] duration metric: took 400.580765ms for pod "kube-proxy-wrvgb" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.369195  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.564251  627293 request.go:632] Waited for 194.959562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382
	I1209 10:52:24.564342  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382
	I1209 10:52:24.564348  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.564357  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.564361  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.568298  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:24.764304  627293 request.go:632] Waited for 195.363618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:24.764392  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:24.764408  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.764418  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.764425  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.768139  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:24.768711  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:24.768733  627293 pod_ready.go:82] duration metric: took 399.519254ms for pod "kube-scheduler-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.768746  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.963667  627293 request.go:632] Waited for 194.82946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m02
	I1209 10:52:24.963730  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m02
	I1209 10:52:24.963736  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.963744  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.963749  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.967092  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:25.164276  627293 request.go:632] Waited for 196.380929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:25.164345  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:25.164349  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.164358  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.164364  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.169070  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:25.169673  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:25.169696  627293 pod_ready.go:82] duration metric: took 400.939865ms for pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:25.169706  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:25.363779  627293 request.go:632] Waited for 193.996151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m03
	I1209 10:52:25.363866  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m03
	I1209 10:52:25.363882  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.363912  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.363923  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.367885  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:25.563919  627293 request.go:632] Waited for 195.39244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:25.563987  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:25.563992  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.564000  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.564003  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.567759  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:25.568223  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:25.568247  627293 pod_ready.go:82] duration metric: took 398.53325ms for pod "kube-scheduler-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:25.568262  627293 pod_ready.go:39] duration metric: took 5.200212564s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 10:52:25.568288  627293 api_server.go:52] waiting for apiserver process to appear ...
	I1209 10:52:25.568359  627293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 10:52:25.588000  627293 api_server.go:72] duration metric: took 22.996035203s to wait for apiserver process to appear ...
	I1209 10:52:25.588031  627293 api_server.go:88] waiting for apiserver healthz status ...
	I1209 10:52:25.588055  627293 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I1209 10:52:25.592469  627293 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I1209 10:52:25.592544  627293 round_trippers.go:463] GET https://192.168.39.69:8443/version
	I1209 10:52:25.592549  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.592557  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.592563  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.593630  627293 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1209 10:52:25.593699  627293 api_server.go:141] control plane version: v1.31.2
	I1209 10:52:25.593714  627293 api_server.go:131] duration metric: took 5.676129ms to wait for apiserver health ...
	I1209 10:52:25.593722  627293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 10:52:25.764156  627293 request.go:632] Waited for 170.352326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:25.764268  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:25.764281  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.764294  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.764301  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.774462  627293 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1209 10:52:25.781848  627293 system_pods.go:59] 24 kube-system pods found
	I1209 10:52:25.781880  627293 system_pods.go:61] "coredns-7c65d6cfc9-8hlml" [d820cd6c-5064-4934-adc8-c68f84c09b46] Running
	I1209 10:52:25.781886  627293 system_pods.go:61] "coredns-7c65d6cfc9-rz6mw" [af297b6d-91f1-4114-b98c-cdfdfbd1589e] Running
	I1209 10:52:25.781890  627293 system_pods.go:61] "etcd-ha-792382" [eee2fbad-7f2f-4f5a-b701-abdbf8456f99] Running
	I1209 10:52:25.781894  627293 system_pods.go:61] "etcd-ha-792382-m02" [424768ff-af5d-4a31-b062-ab2c9576884d] Running
	I1209 10:52:25.781897  627293 system_pods.go:61] "etcd-ha-792382-m03" [4112b988-6915-413a-badd-c0207865e60d] Running
	I1209 10:52:25.781900  627293 system_pods.go:61] "kindnet-6hlht" [23156ebc-d366-4fc2-bedb-7a63e950b116] Running
	I1209 10:52:25.781903  627293 system_pods.go:61] "kindnet-bqp2z" [b2c40579-4d72-4efe-b921-1e0f98b91544] Running
	I1209 10:52:25.781906  627293 system_pods.go:61] "kindnet-hkrhk" [9b35011c-ab45-4f55-a60f-08f9c4509c1d] Running
	I1209 10:52:25.781909  627293 system_pods.go:61] "kube-apiserver-ha-792382" [5157cfb0-bc91-4efe-b2e2-689778d5b012] Running
	I1209 10:52:25.781913  627293 system_pods.go:61] "kube-apiserver-ha-792382-m02" [71f9ce1c-aeff-4853-83ec-01a7fb6a81d5] Running
	I1209 10:52:25.781916  627293 system_pods.go:61] "kube-apiserver-ha-792382-m03" [5cd4395c-58a8-45ba-90ea-72105d25fadd] Running
	I1209 10:52:25.781919  627293 system_pods.go:61] "kube-controller-manager-ha-792382" [f8cb7175-f6fd-4dcf-a6a5-66c44aa136ce] Running
	I1209 10:52:25.781922  627293 system_pods.go:61] "kube-controller-manager-ha-792382-m02" [2f7d8deb-ddad-47ac-ad18-5f8e7d0e095d] Running
	I1209 10:52:25.781926  627293 system_pods.go:61] "kube-controller-manager-ha-792382-m03" [5c5d03de-e7e9-491b-a6fd-fdc50b4ce7ed] Running
	I1209 10:52:25.781930  627293 system_pods.go:61] "kube-proxy-2l42s" [a4bfe3cb-9b06-4d1e-9887-c461d31aaaec] Running
	I1209 10:52:25.781934  627293 system_pods.go:61] "kube-proxy-dckpl" [13f7bda1-f9c2-4fd0-96e0-b6aee1139bc1] Running
	I1209 10:52:25.781940  627293 system_pods.go:61] "kube-proxy-wrvgb" [2531e29f-a4d5-41f9-8c38-3220b4caf96b] Running
	I1209 10:52:25.781942  627293 system_pods.go:61] "kube-scheduler-ha-792382" [b11693d0-b2e7-45a1-a1a2-6519a1535c45] Running
	I1209 10:52:25.781945  627293 system_pods.go:61] "kube-scheduler-ha-792382-m02" [250ce40c-5cbf-4f5d-a475-4f3d7ec100d9] Running
	I1209 10:52:25.781948  627293 system_pods.go:61] "kube-scheduler-ha-792382-m03" [b994f699-40b5-423e-b92f-3ca6208e69d0] Running
	I1209 10:52:25.781951  627293 system_pods.go:61] "kube-vip-ha-792382" [511f3a84-b444-4603-8f6f-eeeb262b1384] Running
	I1209 10:52:25.781954  627293 system_pods.go:61] "kube-vip-ha-792382-m02" [f2110c28-09f5-42f9-8e16-beec9759a0a2] Running
	I1209 10:52:25.781957  627293 system_pods.go:61] "kube-vip-ha-792382-m03" [5eee7c3c-1b75-48ad-813e-963fa4308d1b] Running
	I1209 10:52:25.781960  627293 system_pods.go:61] "storage-provisioner" [4419fe4f-e2ed-4ecb-a912-2dd074e29727] Running
	I1209 10:52:25.781965  627293 system_pods.go:74] duration metric: took 188.238253ms to wait for pod list to return data ...
	I1209 10:52:25.781976  627293 default_sa.go:34] waiting for default service account to be created ...
	I1209 10:52:25.964450  627293 request.go:632] Waited for 182.375955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I1209 10:52:25.964524  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I1209 10:52:25.964529  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.964538  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.964543  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.968489  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:25.968636  627293 default_sa.go:45] found service account: "default"
	I1209 10:52:25.968653  627293 default_sa.go:55] duration metric: took 186.669919ms for default service account to be created ...
	I1209 10:52:25.968664  627293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 10:52:26.163895  627293 request.go:632] Waited for 195.104758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:26.163963  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:26.163969  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:26.163977  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:26.163981  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:26.169457  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:26.176126  627293 system_pods.go:86] 24 kube-system pods found
	I1209 10:52:26.176160  627293 system_pods.go:89] "coredns-7c65d6cfc9-8hlml" [d820cd6c-5064-4934-adc8-c68f84c09b46] Running
	I1209 10:52:26.176166  627293 system_pods.go:89] "coredns-7c65d6cfc9-rz6mw" [af297b6d-91f1-4114-b98c-cdfdfbd1589e] Running
	I1209 10:52:26.176171  627293 system_pods.go:89] "etcd-ha-792382" [eee2fbad-7f2f-4f5a-b701-abdbf8456f99] Running
	I1209 10:52:26.176175  627293 system_pods.go:89] "etcd-ha-792382-m02" [424768ff-af5d-4a31-b062-ab2c9576884d] Running
	I1209 10:52:26.176178  627293 system_pods.go:89] "etcd-ha-792382-m03" [4112b988-6915-413a-badd-c0207865e60d] Running
	I1209 10:52:26.176184  627293 system_pods.go:89] "kindnet-6hlht" [23156ebc-d366-4fc2-bedb-7a63e950b116] Running
	I1209 10:52:26.176189  627293 system_pods.go:89] "kindnet-bqp2z" [b2c40579-4d72-4efe-b921-1e0f98b91544] Running
	I1209 10:52:26.176195  627293 system_pods.go:89] "kindnet-hkrhk" [9b35011c-ab45-4f55-a60f-08f9c4509c1d] Running
	I1209 10:52:26.176201  627293 system_pods.go:89] "kube-apiserver-ha-792382" [5157cfb0-bc91-4efe-b2e2-689778d5b012] Running
	I1209 10:52:26.176206  627293 system_pods.go:89] "kube-apiserver-ha-792382-m02" [71f9ce1c-aeff-4853-83ec-01a7fb6a81d5] Running
	I1209 10:52:26.176212  627293 system_pods.go:89] "kube-apiserver-ha-792382-m03" [5cd4395c-58a8-45ba-90ea-72105d25fadd] Running
	I1209 10:52:26.176220  627293 system_pods.go:89] "kube-controller-manager-ha-792382" [f8cb7175-f6fd-4dcf-a6a5-66c44aa136ce] Running
	I1209 10:52:26.176231  627293 system_pods.go:89] "kube-controller-manager-ha-792382-m02" [2f7d8deb-ddad-47ac-ad18-5f8e7d0e095d] Running
	I1209 10:52:26.176240  627293 system_pods.go:89] "kube-controller-manager-ha-792382-m03" [5c5d03de-e7e9-491b-a6fd-fdc50b4ce7ed] Running
	I1209 10:52:26.176245  627293 system_pods.go:89] "kube-proxy-2l42s" [a4bfe3cb-9b06-4d1e-9887-c461d31aaaec] Running
	I1209 10:52:26.176254  627293 system_pods.go:89] "kube-proxy-dckpl" [13f7bda1-f9c2-4fd0-96e0-b6aee1139bc1] Running
	I1209 10:52:26.176263  627293 system_pods.go:89] "kube-proxy-wrvgb" [2531e29f-a4d5-41f9-8c38-3220b4caf96b] Running
	I1209 10:52:26.176272  627293 system_pods.go:89] "kube-scheduler-ha-792382" [b11693d0-b2e7-45a1-a1a2-6519a1535c45] Running
	I1209 10:52:26.176285  627293 system_pods.go:89] "kube-scheduler-ha-792382-m02" [250ce40c-5cbf-4f5d-a475-4f3d7ec100d9] Running
	I1209 10:52:26.176294  627293 system_pods.go:89] "kube-scheduler-ha-792382-m03" [b994f699-40b5-423e-b92f-3ca6208e69d0] Running
	I1209 10:52:26.176303  627293 system_pods.go:89] "kube-vip-ha-792382" [511f3a84-b444-4603-8f6f-eeeb262b1384] Running
	I1209 10:52:26.176312  627293 system_pods.go:89] "kube-vip-ha-792382-m02" [f2110c28-09f5-42f9-8e16-beec9759a0a2] Running
	I1209 10:52:26.176320  627293 system_pods.go:89] "kube-vip-ha-792382-m03" [5eee7c3c-1b75-48ad-813e-963fa4308d1b] Running
	I1209 10:52:26.176327  627293 system_pods.go:89] "storage-provisioner" [4419fe4f-e2ed-4ecb-a912-2dd074e29727] Running
	I1209 10:52:26.176338  627293 system_pods.go:126] duration metric: took 207.663846ms to wait for k8s-apps to be running ...
	I1209 10:52:26.176348  627293 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 10:52:26.176410  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:52:26.193241  627293 system_svc.go:56] duration metric: took 16.882967ms WaitForService to wait for kubelet
	I1209 10:52:26.193274  627293 kubeadm.go:582] duration metric: took 23.601316183s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 10:52:26.193295  627293 node_conditions.go:102] verifying NodePressure condition ...
	I1209 10:52:26.363791  627293 request.go:632] Waited for 170.378697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes
	I1209 10:52:26.363869  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes
	I1209 10:52:26.363877  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:26.363893  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:26.363902  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:26.369525  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:26.370723  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:52:26.370747  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:52:26.370760  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:52:26.370763  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:52:26.370766  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:52:26.370770  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:52:26.370774  627293 node_conditions.go:105] duration metric: took 177.473705ms to run NodePressure ...
	I1209 10:52:26.370790  627293 start.go:241] waiting for startup goroutines ...
	I1209 10:52:26.370823  627293 start.go:255] writing updated cluster config ...
	I1209 10:52:26.371156  627293 ssh_runner.go:195] Run: rm -f paused
	I1209 10:52:26.426485  627293 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 10:52:26.428634  627293 out.go:177] * Done! kubectl is now configured to use "ha-792382" cluster and "default" namespace by default
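For readers tracing the pod_ready/round_trippers entries above: the pattern in the log is a simple poll loop that GETs each system pod, checks its Ready condition, then GETs the hosting node, retrying (with client-side throttling) until Ready or a 6m timeout. The following is a minimal sketch of that pattern using client-go; the helper name waitPodReady, the kubeconfig path, and the example pod name are illustrative assumptions, not minikube's actual implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the API server until the named pod reports
    // condition Ready=True or the timeout elapses, mirroring the
    // pod_ready checks recorded in the log above.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            // Sleep between polls; the real tooling also backs off on throttling.
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
        // Assumes a reachable kubeconfig in the default location; pod name is illustrative only.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-ha-792382", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }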
	
	
	==> CRI-O <==
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.103647136Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=acc6e955-9af1-4115-9c51-e0a10ef172db name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.104046995Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741772104031126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=acc6e955-9af1-4115-9c51-e0a10ef172db name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.121485446Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=e27f36af-3031-4efe-9267-5f6f7537741d name=/runtime.v1.RuntimeService/Status
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.121550106Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=e27f36af-3031-4efe-9267-5f6f7537741d name=/runtime.v1.RuntimeService/Status
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.122987638Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b077253-2cbb-42a9-bd0c-b1c6b7d85cfd name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.123058500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b077253-2cbb-42a9-bd0c-b1c6b7d85cfd name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.124115956Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2bdd8cd9-c104-4940-8c3d-ae8ccdadacff name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.124729421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741772124709649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2bdd8cd9-c104-4940-8c3d-ae8ccdadacff name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.125286479Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=674c4b1b-382d-41a5-a31f-5a592d5b0863 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.125451082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=674c4b1b-382d-41a5-a31f-5a592d5b0863 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.125722741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3354d3bec20606deb1f19130215939f5c7b2123b73319ef892bffc278d375fb8,PodSandboxId:e47f42b7e0900b7149c93ac504858a9b845addf2278ae1c0abd45e66fc9df066,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733741551077576684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z9wjm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b911f2-4cd1-486a-9276-1e98745ede0e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd,PodSandboxId:a5c60a0e3c19becc3387cabd45cb24c34922bd26351286c88744f9e2caf6f8d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412981896414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8hlml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d820cd6c-5064-4934-adc8-c68f84c09b46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733,PodSandboxId:038ff3d97cfe50dddc147974bab92d243212a5896794a4bc3f1ce2b77fdb5e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412958470450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rz6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
af297b6d-91f1-4114-b98c-cdfdfbd1589e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa96349b5a7bf6e8223fd16d01f77aa0d3c45aa83ad28fc42eec5d2a80dd24,PodSandboxId:02bd44e5a67d9df1c56cc6297ec66940ce286c89b103f7efa4b35259d2a2f8c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733741412903452485,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4419fe4f-e2ed-4ecb-a912-2dd074e29727,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3,PodSandboxId:cfb791c6d05cec0b9cc244408089c309a41f321fe06c2083eabaeaf9184c0ff6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733741400823508110,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c40579-4d72-4efe-b921-1e0f98b91544,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522,PodSandboxId:82b54a7467a7aa9cf761959fb4e7e7ea830c6fdcf33629ec45c8b55326334f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733741398
382511881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrvgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2531e29f-a4d5-41f9-8c38-3220b4caf96b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082e8ff7e6c7e3e1110852687152ddb22202b3134aa3105a8b11aa7d702bcd6a,PodSandboxId:1486ff19db45e7b9cb8b545f56579a8765f6cef7fe3b1d44f967a5c5823b4cdc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173374138888
4669062,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9922f13afb31842008ba0179dabd897e,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63,PodSandboxId:27e12e36b1bd81bfdb6cc876b8c1a9c925ed2eec6ac687056f4bd10eb872b7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733741386069265058,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2460a8b15a62b9cf3ad5343586bde402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f,PodSandboxId:7bbf390b8ef03829849defea1ba3817acb7a86ce1705fb53d765d7ddca57a066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733741386071035939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89a89b1c65df6e3ad9608c5607172f77,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee,PodSandboxId:9493b93aded71efd1c05e2df3b9537c0c63dc9b2e9abdbf1be430d3c29d5ad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733741386009117508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 082fcfac40bcf36b76f1e733a9f73bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604,PodSandboxId:02e8433fa67cc5aef5d43a5edf820340ad5d91eef3bdd9e7149eeec0e55a8c95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733741385994463722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d8d358ed72ac30c9365aedd3aee4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=674c4b1b-382d-41a5-a31f-5a592d5b0863 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.164468993Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75cdcffe-8e0a-4913-8e2e-d29a79361d24 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.164541771Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75cdcffe-8e0a-4913-8e2e-d29a79361d24 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.165534416Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=250b7158-f183-40bd-b04b-d923358424c9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.166155810Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741772166124474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=250b7158-f183-40bd-b04b-d923358424c9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.167139224Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b297035-75b5-40d2-881b-40ecb08b1741 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.167202367Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b297035-75b5-40d2-881b-40ecb08b1741 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.167503304Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3354d3bec20606deb1f19130215939f5c7b2123b73319ef892bffc278d375fb8,PodSandboxId:e47f42b7e0900b7149c93ac504858a9b845addf2278ae1c0abd45e66fc9df066,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733741551077576684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z9wjm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b911f2-4cd1-486a-9276-1e98745ede0e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd,PodSandboxId:a5c60a0e3c19becc3387cabd45cb24c34922bd26351286c88744f9e2caf6f8d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412981896414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8hlml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d820cd6c-5064-4934-adc8-c68f84c09b46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733,PodSandboxId:038ff3d97cfe50dddc147974bab92d243212a5896794a4bc3f1ce2b77fdb5e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412958470450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rz6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
af297b6d-91f1-4114-b98c-cdfdfbd1589e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa96349b5a7bf6e8223fd16d01f77aa0d3c45aa83ad28fc42eec5d2a80dd24,PodSandboxId:02bd44e5a67d9df1c56cc6297ec66940ce286c89b103f7efa4b35259d2a2f8c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733741412903452485,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4419fe4f-e2ed-4ecb-a912-2dd074e29727,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3,PodSandboxId:cfb791c6d05cec0b9cc244408089c309a41f321fe06c2083eabaeaf9184c0ff6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733741400823508110,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c40579-4d72-4efe-b921-1e0f98b91544,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522,PodSandboxId:82b54a7467a7aa9cf761959fb4e7e7ea830c6fdcf33629ec45c8b55326334f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733741398
382511881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrvgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2531e29f-a4d5-41f9-8c38-3220b4caf96b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082e8ff7e6c7e3e1110852687152ddb22202b3134aa3105a8b11aa7d702bcd6a,PodSandboxId:1486ff19db45e7b9cb8b545f56579a8765f6cef7fe3b1d44f967a5c5823b4cdc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173374138888
4669062,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9922f13afb31842008ba0179dabd897e,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63,PodSandboxId:27e12e36b1bd81bfdb6cc876b8c1a9c925ed2eec6ac687056f4bd10eb872b7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733741386069265058,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2460a8b15a62b9cf3ad5343586bde402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f,PodSandboxId:7bbf390b8ef03829849defea1ba3817acb7a86ce1705fb53d765d7ddca57a066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733741386071035939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89a89b1c65df6e3ad9608c5607172f77,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee,PodSandboxId:9493b93aded71efd1c05e2df3b9537c0c63dc9b2e9abdbf1be430d3c29d5ad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733741386009117508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 082fcfac40bcf36b76f1e733a9f73bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604,PodSandboxId:02e8433fa67cc5aef5d43a5edf820340ad5d91eef3bdd9e7149eeec0e55a8c95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733741385994463722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d8d358ed72ac30c9365aedd3aee4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b297035-75b5-40d2-881b-40ecb08b1741 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.203650915Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d79c4a7-7fce-4010-86bc-9c46f8d98f1d name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.203764975Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d79c4a7-7fce-4010-86bc-9c46f8d98f1d name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.205003032Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b21d2db-58ff-4769-89c7-2ce0cbb49020 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.205489977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741772205468271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b21d2db-58ff-4769-89c7-2ce0cbb49020 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.206057686Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40c957c1-ac76-4b20-ad88-f98e6d5a8841 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.206120953Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40c957c1-ac76-4b20-ad88-f98e6d5a8841 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:12 ha-792382 crio[665]: time="2024-12-09 10:56:12.206416200Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3354d3bec20606deb1f19130215939f5c7b2123b73319ef892bffc278d375fb8,PodSandboxId:e47f42b7e0900b7149c93ac504858a9b845addf2278ae1c0abd45e66fc9df066,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733741551077576684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z9wjm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b911f2-4cd1-486a-9276-1e98745ede0e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd,PodSandboxId:a5c60a0e3c19becc3387cabd45cb24c34922bd26351286c88744f9e2caf6f8d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412981896414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8hlml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d820cd6c-5064-4934-adc8-c68f84c09b46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733,PodSandboxId:038ff3d97cfe50dddc147974bab92d243212a5896794a4bc3f1ce2b77fdb5e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412958470450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rz6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
af297b6d-91f1-4114-b98c-cdfdfbd1589e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa96349b5a7bf6e8223fd16d01f77aa0d3c45aa83ad28fc42eec5d2a80dd24,PodSandboxId:02bd44e5a67d9df1c56cc6297ec66940ce286c89b103f7efa4b35259d2a2f8c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733741412903452485,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4419fe4f-e2ed-4ecb-a912-2dd074e29727,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3,PodSandboxId:cfb791c6d05cec0b9cc244408089c309a41f321fe06c2083eabaeaf9184c0ff6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733741400823508110,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c40579-4d72-4efe-b921-1e0f98b91544,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522,PodSandboxId:82b54a7467a7aa9cf761959fb4e7e7ea830c6fdcf33629ec45c8b55326334f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733741398
382511881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrvgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2531e29f-a4d5-41f9-8c38-3220b4caf96b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082e8ff7e6c7e3e1110852687152ddb22202b3134aa3105a8b11aa7d702bcd6a,PodSandboxId:1486ff19db45e7b9cb8b545f56579a8765f6cef7fe3b1d44f967a5c5823b4cdc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173374138888
4669062,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9922f13afb31842008ba0179dabd897e,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63,PodSandboxId:27e12e36b1bd81bfdb6cc876b8c1a9c925ed2eec6ac687056f4bd10eb872b7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733741386069265058,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2460a8b15a62b9cf3ad5343586bde402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f,PodSandboxId:7bbf390b8ef03829849defea1ba3817acb7a86ce1705fb53d765d7ddca57a066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733741386071035939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89a89b1c65df6e3ad9608c5607172f77,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee,PodSandboxId:9493b93aded71efd1c05e2df3b9537c0c63dc9b2e9abdbf1be430d3c29d5ad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733741386009117508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 082fcfac40bcf36b76f1e733a9f73bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604,PodSandboxId:02e8433fa67cc5aef5d43a5edf820340ad5d91eef3bdd9e7149eeec0e55a8c95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733741385994463722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d8d358ed72ac30c9365aedd3aee4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40c957c1-ac76-4b20-ad88-f98e6d5a8841 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3354d3bec2060       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e47f42b7e0900       busybox-7dff88458-z9wjm
	f4ba11ff07ea5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   a5c60a0e3c19b       coredns-7c65d6cfc9-8hlml
	afc0f0aea4c8a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   038ff3d97cfe5       coredns-7c65d6cfc9-rz6mw
	d9fa96349b5a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   02bd44e5a67d9       storage-provisioner
	b6bf7c7cf0d68       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3    6 minutes ago       Running             kindnet-cni               0                   cfb791c6d05ce       kindnet-bqp2z
	3cf6196a4789e       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   82b54a7467a7a       kube-proxy-wrvgb
	082e8ff7e6c7e       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   1486ff19db45e       kube-vip-ha-792382
	64b96c1c22970       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   7bbf390b8ef03       kube-apiserver-ha-792382
	778345b29099a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   27e12e36b1bd8       etcd-ha-792382
	d93c68b855d9f       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   9493b93aded71       kube-scheduler-ha-792382
	00db8f77881ef       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   02e8433fa67cc       kube-controller-manager-ha-792382
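	(The container-status table above is the kind of listing crictl prints on the node itself; a minimal way to reproduce it, assuming the ha-792382 profile is still running and reachable over minikube ssh, would be:

	  # list all containers (running and exited) via the CRI on the node
	  minikube -p ha-792382 ssh -- sudo crictl ps -a

	The exact flags the log collector uses may differ; this is only a sketch for re-checking container state by hand.)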
	
	
	==> coredns [afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733] <==
	[INFO] 10.244.2.2:57485 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000178522s
	[INFO] 10.244.2.2:51008 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003461693s
	[INFO] 10.244.2.2:51209 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132423s
	[INFO] 10.244.2.2:44233 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160403s
	[INFO] 10.244.2.2:36343 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113366s
	[INFO] 10.244.1.2:40108 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001755871s
	[INFO] 10.244.1.2:57627 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088641s
	[INFO] 10.244.0.4:49175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210271s
	[INFO] 10.244.0.4:42721 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001653061s
	[INFO] 10.244.0.4:53085 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087293s
	[INFO] 10.244.2.2:46633 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111394s
	[INFO] 10.244.2.2:34060 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087724s
	[INFO] 10.244.2.2:42086 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112165s
	[INFO] 10.244.1.2:55917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167759s
	[INFO] 10.244.1.2:38190 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113655s
	[INFO] 10.244.1.2:46262 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092112s
	[INFO] 10.244.1.2:55410 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080217s
	[INFO] 10.244.0.4:43802 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073668s
	[INFO] 10.244.0.4:48010 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099328s
	[INFO] 10.244.0.4:45687 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004859s
	[INFO] 10.244.2.2:35669 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019184s
	[INFO] 10.244.2.2:54242 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000232065s
	[INFO] 10.244.2.2:41931 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000140914s
	[INFO] 10.244.0.4:48531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105047s
	[INFO] 10.244.0.4:36756 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000068167s
	
	
	==> coredns [f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd] <==
	[INFO] 10.244.0.4:58900 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184784s
	[INFO] 10.244.0.4:59585 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.004212695s
	[INFO] 10.244.0.4:42331 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001567158s
	[INFO] 10.244.2.2:43555 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003700387s
	[INFO] 10.244.2.2:38437 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000268841s
	[INFO] 10.244.1.2:36722 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174774s
	[INFO] 10.244.1.2:46295 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000167521s
	[INFO] 10.244.1.2:36004 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192453s
	[INFO] 10.244.1.2:54275 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001271437s
	[INFO] 10.244.1.2:48954 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183213s
	[INFO] 10.244.1.2:57839 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00017811s
	[INFO] 10.244.0.4:54946 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001925365s
	[INFO] 10.244.0.4:59669 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000722s
	[INFO] 10.244.0.4:40897 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074421s
	[INFO] 10.244.0.4:46937 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174065s
	[INFO] 10.244.0.4:34613 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075946s
	[INFO] 10.244.2.2:44189 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216239s
	[INFO] 10.244.0.4:39246 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155453s
	[INFO] 10.244.2.2:48134 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000162494s
	[INFO] 10.244.1.2:44589 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125364s
	[INFO] 10.244.1.2:59702 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00019329s
	[INFO] 10.244.1.2:58920 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000146935s
	[INFO] 10.244.1.2:55802 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000116158s
	[INFO] 10.244.0.4:47226 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097556s
	[INFO] 10.244.0.4:42857 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073279s
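	(The CoreDNS query logs above can generally also be pulled straight from the cluster; a minimal sketch, assuming the kubeconfig context matches the profile name and using the pod names from the container listing earlier:

	  # dump logs for both coredns replicas in kube-system
	  kubectl --context ha-792382 -n kube-system logs coredns-7c65d6cfc9-rz6mw
	  kubectl --context ha-792382 -n kube-system logs coredns-7c65d6cfc9-8hlml
	)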
	
	
	==> describe nodes <==
	Name:               ha-792382
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792382
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=ha-792382
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T10_49_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:49:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792382
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:56:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 10:52:55 +0000   Mon, 09 Dec 2024 10:49:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 10:52:55 +0000   Mon, 09 Dec 2024 10:49:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 10:52:55 +0000   Mon, 09 Dec 2024 10:49:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 10:52:55 +0000   Mon, 09 Dec 2024 10:50:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    ha-792382
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c956a5ad4d142099b593c1d9352f7b5
	  System UUID:                2c956a5a-d4d1-4209-9b59-3c1d9352f7b5
	  Boot ID:                    5140ef96-1a92-4f56-b80b-7e99ce150ca0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-z9wjm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 coredns-7c65d6cfc9-8hlml             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m16s
	  kube-system                 coredns-7c65d6cfc9-rz6mw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m16s
	  kube-system                 etcd-ha-792382                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m20s
	  kube-system                 kindnet-bqp2z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m16s
	  kube-system                 kube-apiserver-ha-792382             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-controller-manager-ha-792382    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-wrvgb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-scheduler-ha-792382             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-vip-ha-792382                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m13s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m27s (x7 over 6m27s)  kubelet          Node ha-792382 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m27s (x8 over 6m27s)  kubelet          Node ha-792382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m27s (x8 over 6m27s)  kubelet          Node ha-792382 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m20s                  kubelet          Node ha-792382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s                  kubelet          Node ha-792382 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s                  kubelet          Node ha-792382 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m17s                  node-controller  Node ha-792382 event: Registered Node ha-792382 in Controller
	  Normal  NodeReady                6m                     kubelet          Node ha-792382 status is now: NodeReady
	  Normal  RegisteredNode           5m20s                  node-controller  Node ha-792382 event: Registered Node ha-792382 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-792382 event: Registered Node ha-792382 in Controller
	
	
	Name:               ha-792382-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792382-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=ha-792382
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T10_50_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:50:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792382-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:53:37 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 09 Dec 2024 10:52:48 +0000   Mon, 09 Dec 2024 10:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 09 Dec 2024 10:52:48 +0000   Mon, 09 Dec 2024 10:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 09 Dec 2024 10:52:48 +0000   Mon, 09 Dec 2024 10:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 09 Dec 2024 10:52:48 +0000   Mon, 09 Dec 2024 10:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-792382-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 167721adca2249268bf51688530c2893
	  System UUID:                167721ad-ca22-4926-8bf5-1688530c2893
	  Boot ID:                    74f1c671-e420-4f88-b05b-e50c0597ee01
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rbrpt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 etcd-ha-792382-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m26s
	  kube-system                 kindnet-hkrhk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m28s
	  kube-system                 kube-apiserver-ha-792382-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-ha-792382-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-dckpl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-scheduler-ha-792382-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-vip-ha-792382-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m28s (x8 over 5m28s)  kubelet          Node ha-792382-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m28s (x8 over 5m28s)  kubelet          Node ha-792382-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m28s (x7 over 5m28s)  kubelet          Node ha-792382-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-792382-m02 event: Registered Node ha-792382-m02 in Controller
	  Normal  RegisteredNode           5m20s                  node-controller  Node ha-792382-m02 event: Registered Node ha-792382-m02 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-792382-m02 event: Registered Node ha-792382-m02 in Controller
	  Normal  NodeNotReady             112s                   node-controller  Node ha-792382-m02 status is now: NodeNotReady
	
	
	Name:               ha-792382-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792382-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=ha-792382
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T10_52_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:51:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792382-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:56:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 10:53:00 +0000   Mon, 09 Dec 2024 10:51:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 10:53:00 +0000   Mon, 09 Dec 2024 10:51:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 10:53:00 +0000   Mon, 09 Dec 2024 10:51:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 10:53:00 +0000   Mon, 09 Dec 2024 10:52:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.82
	  Hostname:    ha-792382-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7e770a97238401cb03ba22edd7f66bc
	  System UUID:                c7e770a9-7238-401c-b03b-a22edd7f66bc
	  Boot ID:                    75bcd068-8763-4e3a-b01e-036ac11d2956
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ft8s2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 etcd-ha-792382-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m13s
	  kube-system                 kindnet-6hlht                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m14s
	  kube-system                 kube-apiserver-ha-792382-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-controller-manager-ha-792382-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-proxy-2l42s                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-scheduler-ha-792382-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-vip-ha-792382-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  Starting                 4m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s (x8 over 4m14s)  kubelet          Node ha-792382-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s (x8 over 4m14s)  kubelet          Node ha-792382-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s (x7 over 4m14s)  kubelet          Node ha-792382-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-792382-m03 event: Registered Node ha-792382-m03 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-792382-m03 event: Registered Node ha-792382-m03 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-792382-m03 event: Registered Node ha-792382-m03 in Controller
	
	
	Name:               ha-792382-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792382-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=ha-792382
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T10_53_05_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:53:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792382-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:56:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 10:53:35 +0000   Mon, 09 Dec 2024 10:53:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 10:53:35 +0000   Mon, 09 Dec 2024 10:53:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 10:53:35 +0000   Mon, 09 Dec 2024 10:53:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 10:53:35 +0000   Mon, 09 Dec 2024 10:53:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    ha-792382-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7109c0766654d148c611df97b2ed795
	  System UUID:                f7109c07-6665-4d14-8c61-1df97b2ed795
	  Boot ID:                    8d79820d-d818-486f-88fb-9a376256bc79
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rwsmp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m8s
	  kube-system                 kube-proxy-727n6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m8s (x2 over 3m8s)  kubelet          Node ha-792382-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m8s (x2 over 3m8s)  kubelet          Node ha-792382-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m8s (x2 over 3m8s)  kubelet          Node ha-792382-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m7s                 node-controller  Node ha-792382-m04 event: Registered Node ha-792382-m04 in Controller
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-792382-m04 event: Registered Node ha-792382-m04 in Controller
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-792382-m04 event: Registered Node ha-792382-m04 in Controller
	  Normal  NodeReady                2m47s                kubelet          Node ha-792382-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 9 10:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052723] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037555] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.827157] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.929161] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.560988] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.837514] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.057481] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052320] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.193651] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.117185] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.263430] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.805323] systemd-fstab-generator[749]: Ignoring "noauto" option for root device
	[  +3.647118] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.055434] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.026961] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.076746] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.128281] kauditd_printk_skb: 21 callbacks suppressed
	[Dec 9 10:50] kauditd_printk_skb: 38 callbacks suppressed
	[ +38.131475] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63] <==
	{"level":"warn","ts":"2024-12-09T10:56:12.144593Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.231172Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.331488Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.356239Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.471465Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.479436Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.486457Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.493011Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.495731Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.500732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.506771Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.513535Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.517850Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.520674Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.527499Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.528564Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.531534Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.536632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.546352Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.549693Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.552833Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.557201Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.562689Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.571237Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:12.631966Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:56:12 up 6 min,  0 users,  load average: 0.45, 0.32, 0.16
	Linux ha-792382 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3] <==
	I1209 10:55:41.792124       1 main.go:324] Node ha-792382-m04 has CIDR [10.244.3.0/24] 
	I1209 10:55:51.785788       1 main.go:297] Handling node with IPs: map[192.168.39.69:{}]
	I1209 10:55:51.785901       1 main.go:301] handling current node
	I1209 10:55:51.785962       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1209 10:55:51.785993       1 main.go:324] Node ha-792382-m02 has CIDR [10.244.1.0/24] 
	I1209 10:55:51.786189       1 main.go:297] Handling node with IPs: map[192.168.39.82:{}]
	I1209 10:55:51.786293       1 main.go:324] Node ha-792382-m03 has CIDR [10.244.2.0/24] 
	I1209 10:55:51.786573       1 main.go:297] Handling node with IPs: map[192.168.39.54:{}]
	I1209 10:55:51.786644       1 main.go:324] Node ha-792382-m04 has CIDR [10.244.3.0/24] 
	I1209 10:56:01.783030       1 main.go:297] Handling node with IPs: map[192.168.39.69:{}]
	I1209 10:56:01.783176       1 main.go:301] handling current node
	I1209 10:56:01.783209       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1209 10:56:01.783262       1 main.go:324] Node ha-792382-m02 has CIDR [10.244.1.0/24] 
	I1209 10:56:01.783503       1 main.go:297] Handling node with IPs: map[192.168.39.82:{}]
	I1209 10:56:01.783567       1 main.go:324] Node ha-792382-m03 has CIDR [10.244.2.0/24] 
	I1209 10:56:01.784071       1 main.go:297] Handling node with IPs: map[192.168.39.54:{}]
	I1209 10:56:01.784166       1 main.go:324] Node ha-792382-m04 has CIDR [10.244.3.0/24] 
	I1209 10:56:11.792014       1 main.go:297] Handling node with IPs: map[192.168.39.69:{}]
	I1209 10:56:11.792252       1 main.go:301] handling current node
	I1209 10:56:11.792297       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1209 10:56:11.792379       1 main.go:324] Node ha-792382-m02 has CIDR [10.244.1.0/24] 
	I1209 10:56:11.792752       1 main.go:297] Handling node with IPs: map[192.168.39.82:{}]
	I1209 10:56:11.792788       1 main.go:324] Node ha-792382-m03 has CIDR [10.244.2.0/24] 
	I1209 10:56:11.792953       1 main.go:297] Handling node with IPs: map[192.168.39.54:{}]
	I1209 10:56:11.792978       1 main.go:324] Node ha-792382-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f] <==
	I1209 10:49:52.072307       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1209 10:49:52.095069       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1209 10:49:56.392767       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1209 10:49:56.516080       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1209 10:51:59.302973       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1209 10:51:59.303668       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 331.746µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1209 10:51:59.304570       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1209 10:51:59.308414       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1209 10:51:59.309695       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="6.795998ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1209 10:52:32.421048       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43832: use of closed network connection
	E1209 10:52:32.619590       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43852: use of closed network connection
	E1209 10:52:32.815616       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43862: use of closed network connection
	E1209 10:52:33.010440       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43888: use of closed network connection
	E1209 10:52:33.191451       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43910: use of closed network connection
	E1209 10:52:33.385647       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43930: use of closed network connection
	E1209 10:52:33.571472       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43946: use of closed network connection
	E1209 10:52:33.741655       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43972: use of closed network connection
	E1209 10:52:33.919176       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43990: use of closed network connection
	E1209 10:52:34.226233       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44000: use of closed network connection
	E1209 10:52:34.408728       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44016: use of closed network connection
	E1209 10:52:34.588897       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44034: use of closed network connection
	E1209 10:52:34.765608       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44050: use of closed network connection
	E1209 10:52:34.943122       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44058: use of closed network connection
	E1209 10:52:35.115793       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44068: use of closed network connection
	W1209 10:54:00.405476       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.69 192.168.39.82]
	
	
	==> kube-controller-manager [00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604] <==
	I1209 10:53:04.483677       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-792382-m04" podCIDRs=["10.244.3.0/24"]
	I1209 10:53:04.483873       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:04.484031       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:04.508782       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:04.947247       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:05.336150       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:05.632610       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-792382-m04"
	I1209 10:53:05.665145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:07.101579       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:07.148958       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:08.041907       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:08.474258       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:14.706287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:25.397617       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:25.397765       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-792382-m04"
	I1209 10:53:25.412410       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:25.649201       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:35.378859       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:54:20.671888       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m02"
	I1209 10:54:20.672434       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-792382-m04"
	I1209 10:54:20.703980       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m02"
	I1209 10:54:20.840624       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.419282ms"
	I1209 10:54:20.841721       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="157.508µs"
	I1209 10:54:22.157822       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m02"
	I1209 10:54:25.899451       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m02"
	
	
	==> kube-proxy [3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 10:49:58.601423       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 10:49:58.617859       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	E1209 10:49:58.617945       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 10:49:58.657152       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 10:49:58.657213       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 10:49:58.657247       1 server_linux.go:169] "Using iptables Proxier"
	I1209 10:49:58.660760       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 10:49:58.661154       1 server.go:483] "Version info" version="v1.31.2"
	I1209 10:49:58.661230       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 10:49:58.663604       1 config.go:199] "Starting service config controller"
	I1209 10:49:58.663767       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 10:49:58.664471       1 config.go:105] "Starting endpoint slice config controller"
	I1209 10:49:58.664498       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 10:49:58.666409       1 config.go:328] "Starting node config controller"
	I1209 10:49:58.666433       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 10:49:58.765096       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 10:49:58.767373       1 shared_informer.go:320] Caches are synced for service config
	I1209 10:49:58.767373       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee] <==
	W1209 10:49:49.686971       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 10:49:49.687036       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1209 10:49:49.693717       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1209 10:49:49.693755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:49.756854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 10:49:49.756907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:49.761365       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 10:49:49.761407       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:49.901909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 10:49:49.902484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:50.012571       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1209 10:49:50.012617       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:50.018069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1209 10:49:50.018128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:50.045681       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1209 10:49:50.045732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:50.048146       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 10:49:50.048203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1209 10:49:51.665195       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1209 10:52:27.353144       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ft8s2\": pod busybox-7dff88458-ft8s2 is already assigned to node \"ha-792382-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-ft8s2" node="ha-792382-m03"
	E1209 10:52:27.354035       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 51271b6c-9fb3-4893-8502-54b74c4cbaa5(default/busybox-7dff88458-ft8s2) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-ft8s2"
	E1209 10:52:27.354086       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ft8s2\": pod busybox-7dff88458-ft8s2 is already assigned to node \"ha-792382-m03\"" pod="default/busybox-7dff88458-ft8s2"
	I1209 10:52:27.354141       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-ft8s2" node="ha-792382-m03"
	E1209 10:52:27.402980       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-z9wjm\": pod busybox-7dff88458-z9wjm is already assigned to node \"ha-792382\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-z9wjm" node="ha-792382"
	E1209 10:52:27.403164       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-z9wjm\": pod busybox-7dff88458-z9wjm is already assigned to node \"ha-792382\"" pod="default/busybox-7dff88458-z9wjm"
	
	
	==> kubelet <==
	Dec 09 10:54:52 ha-792382 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 10:54:52 ha-792382 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 10:54:52 ha-792382 kubelet[1304]: E1209 10:54:52.082247    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741692081818749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:54:52 ha-792382 kubelet[1304]: E1209 10:54:52.082273    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741692081818749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:02 ha-792382 kubelet[1304]: E1209 10:55:02.088147    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741702086894201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:02 ha-792382 kubelet[1304]: E1209 10:55:02.088210    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741702086894201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:12 ha-792382 kubelet[1304]: E1209 10:55:12.089935    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741712089600382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:12 ha-792382 kubelet[1304]: E1209 10:55:12.090372    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741712089600382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:22 ha-792382 kubelet[1304]: E1209 10:55:22.094837    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741722094438540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:22 ha-792382 kubelet[1304]: E1209 10:55:22.094877    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741722094438540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:32 ha-792382 kubelet[1304]: E1209 10:55:32.096240    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741732095902907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:32 ha-792382 kubelet[1304]: E1209 10:55:32.096268    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741732095902907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:42 ha-792382 kubelet[1304]: E1209 10:55:42.098166    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741742097877429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:42 ha-792382 kubelet[1304]: E1209 10:55:42.098566    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741742097877429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:52 ha-792382 kubelet[1304]: E1209 10:55:52.004085    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 10:55:52 ha-792382 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 10:55:52 ha-792382 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 10:55:52 ha-792382 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 10:55:52 ha-792382 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 10:55:52 ha-792382 kubelet[1304]: E1209 10:55:52.100761    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741752100425512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:52 ha-792382 kubelet[1304]: E1209 10:55:52.100783    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741752100425512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:56:02 ha-792382 kubelet[1304]: E1209 10:56:02.102546    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741762102177289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:56:02 ha-792382 kubelet[1304]: E1209 10:56:02.102939    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741762102177289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:56:12 ha-792382 kubelet[1304]: E1209 10:56:12.104513    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741772104031126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:56:12 ha-792382 kubelet[1304]: E1209 10:56:12.104554    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741772104031126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-792382 -n ha-792382
helpers_test.go:261: (dbg) Run:  kubectl --context ha-792382 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.67s)
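
For reference, a minimal by-hand reproduction of what this test exercises, assuming the ha-792382 profile from the logs above is still present; the commands are the same ones recorded in the audit log further down, only the ordering and the comments are illustrative:

	# stop the secondary control-plane node, then ask minikube for cluster status
	out/minikube-linux-amd64 -p ha-792382 node stop m02 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr
	# with one of three control planes down the cluster should be degraded, not dead:
	# the remaining apiservers must still answer kubectl
	kubectl --context ha-792382 get nodes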

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr: (3.987430979s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
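The four assertions above amount to counting control planes, hosts, kubelets and apiservers in the status output; a minimal spot-check by hand, assuming the same profile and node name used by the test (the sleep is illustrative and not part of ha_test.go):

	# restart the stopped secondary control plane and re-query status
	out/minikube-linux-amd64 -p ha-792382 node start m02 -v=7 --alsologtostderr
	sleep 30   # allow the kubelet and apiserver on m02 to come back
	out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr
	# expected: 4 Ready nodes, 3 of them control-plane
	kubectl --context ha-792382 get nodes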
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-792382 -n ha-792382
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-792382 logs -n 25: (1.397419721s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382:/home/docker/cp-test_ha-792382-m03_ha-792382.txt                       |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382 sudo cat                                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m03_ha-792382.txt                                 |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m02:/home/docker/cp-test_ha-792382-m03_ha-792382-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m02 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m03_ha-792382-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04:/home/docker/cp-test_ha-792382-m03_ha-792382-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m04 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m03_ha-792382-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp testdata/cp-test.txt                                                | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3558467820/001/cp-test_ha-792382-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382:/home/docker/cp-test_ha-792382-m04_ha-792382.txt                       |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382 sudo cat                                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382.txt                                 |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m02:/home/docker/cp-test_ha-792382-m04_ha-792382-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m02 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03:/home/docker/cp-test_ha-792382-m04_ha-792382-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m03 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-792382 node stop m02 -v=7                                                     | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-792382 node start m02 -v=7                                                    | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 10:49:12
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 10:49:12.155112  627293 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:49:12.155243  627293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:49:12.155252  627293 out.go:358] Setting ErrFile to fd 2...
	I1209 10:49:12.155256  627293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:49:12.155455  627293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 10:49:12.156111  627293 out.go:352] Setting JSON to false
	I1209 10:49:12.157109  627293 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":12696,"bootTime":1733728656,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 10:49:12.157245  627293 start.go:139] virtualization: kvm guest
	I1209 10:49:12.159303  627293 out.go:177] * [ha-792382] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 10:49:12.160611  627293 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 10:49:12.160611  627293 notify.go:220] Checking for updates...
	I1209 10:49:12.163029  627293 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 10:49:12.164218  627293 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:49:12.165346  627293 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:12.166392  627293 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 10:49:12.168066  627293 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 10:49:12.169526  627293 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 10:49:12.205667  627293 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 10:49:12.206853  627293 start.go:297] selected driver: kvm2
	I1209 10:49:12.206869  627293 start.go:901] validating driver "kvm2" against <nil>
	I1209 10:49:12.206881  627293 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 10:49:12.207633  627293 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:49:12.207718  627293 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 10:49:12.223409  627293 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 10:49:12.223621  627293 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 10:49:12.224275  627293 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 10:49:12.224320  627293 cni.go:84] Creating CNI manager for ""
	I1209 10:49:12.224382  627293 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1209 10:49:12.224394  627293 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 10:49:12.224467  627293 start.go:340] cluster config:
	{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:49:12.224624  627293 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:49:12.226221  627293 out.go:177] * Starting "ha-792382" primary control-plane node in "ha-792382" cluster
	I1209 10:49:12.227308  627293 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:49:12.227336  627293 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 10:49:12.227354  627293 cache.go:56] Caching tarball of preloaded images
	I1209 10:49:12.227432  627293 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 10:49:12.227447  627293 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 10:49:12.227749  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:49:12.227772  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json: {Name:mkc1440c2022322fca4f71077ddb8bd509450a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:12.227928  627293 start.go:360] acquireMachinesLock for ha-792382: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 10:49:12.227972  627293 start.go:364] duration metric: took 26.731µs to acquireMachinesLock for "ha-792382"
	I1209 10:49:12.227996  627293 start.go:93] Provisioning new machine with config: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:49:12.228057  627293 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 10:49:12.229507  627293 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 10:49:12.229650  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:12.229688  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:12.243739  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40935
	I1209 10:49:12.244181  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:12.244733  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:12.244754  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:12.245151  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:12.245359  627293 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:49:12.245524  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:12.245673  627293 start.go:159] libmachine.API.Create for "ha-792382" (driver="kvm2")
	I1209 10:49:12.245706  627293 client.go:168] LocalClient.Create starting
	I1209 10:49:12.245734  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 10:49:12.245764  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:49:12.245782  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:49:12.245831  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 10:49:12.245849  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:49:12.245860  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:49:12.245876  627293 main.go:141] libmachine: Running pre-create checks...
	I1209 10:49:12.245884  627293 main.go:141] libmachine: (ha-792382) Calling .PreCreateCheck
	I1209 10:49:12.246327  627293 main.go:141] libmachine: (ha-792382) Calling .GetConfigRaw
	I1209 10:49:12.246669  627293 main.go:141] libmachine: Creating machine...
	I1209 10:49:12.246682  627293 main.go:141] libmachine: (ha-792382) Calling .Create
	I1209 10:49:12.246831  627293 main.go:141] libmachine: (ha-792382) Creating KVM machine...
	I1209 10:49:12.248145  627293 main.go:141] libmachine: (ha-792382) DBG | found existing default KVM network
	I1209 10:49:12.248911  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.248755  627316 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123350}
	I1209 10:49:12.248939  627293 main.go:141] libmachine: (ha-792382) DBG | created network xml: 
	I1209 10:49:12.248951  627293 main.go:141] libmachine: (ha-792382) DBG | <network>
	I1209 10:49:12.248971  627293 main.go:141] libmachine: (ha-792382) DBG |   <name>mk-ha-792382</name>
	I1209 10:49:12.248981  627293 main.go:141] libmachine: (ha-792382) DBG |   <dns enable='no'/>
	I1209 10:49:12.248994  627293 main.go:141] libmachine: (ha-792382) DBG |   
	I1209 10:49:12.249009  627293 main.go:141] libmachine: (ha-792382) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1209 10:49:12.249019  627293 main.go:141] libmachine: (ha-792382) DBG |     <dhcp>
	I1209 10:49:12.249032  627293 main.go:141] libmachine: (ha-792382) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1209 10:49:12.249045  627293 main.go:141] libmachine: (ha-792382) DBG |     </dhcp>
	I1209 10:49:12.249058  627293 main.go:141] libmachine: (ha-792382) DBG |   </ip>
	I1209 10:49:12.249067  627293 main.go:141] libmachine: (ha-792382) DBG |   
	I1209 10:49:12.249134  627293 main.go:141] libmachine: (ha-792382) DBG | </network>
	I1209 10:49:12.249173  627293 main.go:141] libmachine: (ha-792382) DBG | 
	I1209 10:49:12.253952  627293 main.go:141] libmachine: (ha-792382) DBG | trying to create private KVM network mk-ha-792382 192.168.39.0/24...
	I1209 10:49:12.320765  627293 main.go:141] libmachine: (ha-792382) DBG | private KVM network mk-ha-792382 192.168.39.0/24 created
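
The lines above show the network XML minikube hands to libvirt and the resulting private network. As a rough sketch of that step, not the code that produced this log, defining and starting such a network through the libvirt Go bindings could look like the following; the libvirt.org/go/libvirt module path and the hard-coded XML are assumptions for illustration, and the bindings require cgo plus the libvirt development headers:

    // Illustrative only: define and start a private libvirt network shaped
    // like mk-ha-792382 from the log above.
    package main

    import (
    	"fmt"
    	"log"

    	libvirt "libvirt.org/go/libvirt"
    )

    const networkXML = `<network>
      <name>mk-ha-792382</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		log.Fatalf("connect: %v", err)
    	}
    	defer conn.Close()

    	// Register the persistent network object, then bring it up.
    	net, err := conn.NetworkDefineXML(networkXML)
    	if err != nil {
    		log.Fatalf("define network: %v", err)
    	}
    	defer net.Free()

    	if err := net.Create(); err != nil {
    		log.Fatalf("start network: %v", err)
    	}
    	fmt.Println("private network mk-ha-792382 is active")
    }
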
	I1209 10:49:12.320810  627293 main.go:141] libmachine: (ha-792382) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382 ...
	I1209 10:49:12.320824  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.320703  627316 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:12.320846  627293 main.go:141] libmachine: (ha-792382) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 10:49:12.320864  627293 main.go:141] libmachine: (ha-792382) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 10:49:12.624365  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.624217  627316 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa...
	I1209 10:49:12.718158  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.718015  627316 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/ha-792382.rawdisk...
	I1209 10:49:12.718234  627293 main.go:141] libmachine: (ha-792382) DBG | Writing magic tar header
	I1209 10:49:12.718307  627293 main.go:141] libmachine: (ha-792382) DBG | Writing SSH key tar header
	I1209 10:49:12.718345  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.718134  627316 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382 ...
	I1209 10:49:12.718360  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382 (perms=drwx------)
	I1209 10:49:12.718367  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382
	I1209 10:49:12.718384  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 10:49:12.718399  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:12.718409  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 10:49:12.718416  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 10:49:12.718424  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 10:49:12.718431  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 10:49:12.718436  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins
	I1209 10:49:12.718443  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home
	I1209 10:49:12.718449  627293 main.go:141] libmachine: (ha-792382) DBG | Skipping /home - not owner
	I1209 10:49:12.718461  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 10:49:12.718475  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 10:49:12.718495  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 10:49:12.718506  627293 main.go:141] libmachine: (ha-792382) Creating domain...
	I1209 10:49:12.719443  627293 main.go:141] libmachine: (ha-792382) define libvirt domain using xml: 
	I1209 10:49:12.719473  627293 main.go:141] libmachine: (ha-792382) <domain type='kvm'>
	I1209 10:49:12.719482  627293 main.go:141] libmachine: (ha-792382)   <name>ha-792382</name>
	I1209 10:49:12.719490  627293 main.go:141] libmachine: (ha-792382)   <memory unit='MiB'>2200</memory>
	I1209 10:49:12.719512  627293 main.go:141] libmachine: (ha-792382)   <vcpu>2</vcpu>
	I1209 10:49:12.719521  627293 main.go:141] libmachine: (ha-792382)   <features>
	I1209 10:49:12.719529  627293 main.go:141] libmachine: (ha-792382)     <acpi/>
	I1209 10:49:12.719537  627293 main.go:141] libmachine: (ha-792382)     <apic/>
	I1209 10:49:12.719561  627293 main.go:141] libmachine: (ha-792382)     <pae/>
	I1209 10:49:12.719580  627293 main.go:141] libmachine: (ha-792382)     
	I1209 10:49:12.719586  627293 main.go:141] libmachine: (ha-792382)   </features>
	I1209 10:49:12.719602  627293 main.go:141] libmachine: (ha-792382)   <cpu mode='host-passthrough'>
	I1209 10:49:12.719613  627293 main.go:141] libmachine: (ha-792382)   
	I1209 10:49:12.719619  627293 main.go:141] libmachine: (ha-792382)   </cpu>
	I1209 10:49:12.719631  627293 main.go:141] libmachine: (ha-792382)   <os>
	I1209 10:49:12.719637  627293 main.go:141] libmachine: (ha-792382)     <type>hvm</type>
	I1209 10:49:12.719648  627293 main.go:141] libmachine: (ha-792382)     <boot dev='cdrom'/>
	I1209 10:49:12.719659  627293 main.go:141] libmachine: (ha-792382)     <boot dev='hd'/>
	I1209 10:49:12.719681  627293 main.go:141] libmachine: (ha-792382)     <bootmenu enable='no'/>
	I1209 10:49:12.719701  627293 main.go:141] libmachine: (ha-792382)   </os>
	I1209 10:49:12.719719  627293 main.go:141] libmachine: (ha-792382)   <devices>
	I1209 10:49:12.719738  627293 main.go:141] libmachine: (ha-792382)     <disk type='file' device='cdrom'>
	I1209 10:49:12.719756  627293 main.go:141] libmachine: (ha-792382)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/boot2docker.iso'/>
	I1209 10:49:12.719767  627293 main.go:141] libmachine: (ha-792382)       <target dev='hdc' bus='scsi'/>
	I1209 10:49:12.719777  627293 main.go:141] libmachine: (ha-792382)       <readonly/>
	I1209 10:49:12.719791  627293 main.go:141] libmachine: (ha-792382)     </disk>
	I1209 10:49:12.719805  627293 main.go:141] libmachine: (ha-792382)     <disk type='file' device='disk'>
	I1209 10:49:12.719816  627293 main.go:141] libmachine: (ha-792382)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 10:49:12.719831  627293 main.go:141] libmachine: (ha-792382)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/ha-792382.rawdisk'/>
	I1209 10:49:12.719845  627293 main.go:141] libmachine: (ha-792382)       <target dev='hda' bus='virtio'/>
	I1209 10:49:12.719857  627293 main.go:141] libmachine: (ha-792382)     </disk>
	I1209 10:49:12.719868  627293 main.go:141] libmachine: (ha-792382)     <interface type='network'>
	I1209 10:49:12.719881  627293 main.go:141] libmachine: (ha-792382)       <source network='mk-ha-792382'/>
	I1209 10:49:12.719892  627293 main.go:141] libmachine: (ha-792382)       <model type='virtio'/>
	I1209 10:49:12.719902  627293 main.go:141] libmachine: (ha-792382)     </interface>
	I1209 10:49:12.719910  627293 main.go:141] libmachine: (ha-792382)     <interface type='network'>
	I1209 10:49:12.719940  627293 main.go:141] libmachine: (ha-792382)       <source network='default'/>
	I1209 10:49:12.719966  627293 main.go:141] libmachine: (ha-792382)       <model type='virtio'/>
	I1209 10:49:12.719981  627293 main.go:141] libmachine: (ha-792382)     </interface>
	I1209 10:49:12.719994  627293 main.go:141] libmachine: (ha-792382)     <serial type='pty'>
	I1209 10:49:12.720009  627293 main.go:141] libmachine: (ha-792382)       <target port='0'/>
	I1209 10:49:12.720026  627293 main.go:141] libmachine: (ha-792382)     </serial>
	I1209 10:49:12.720038  627293 main.go:141] libmachine: (ha-792382)     <console type='pty'>
	I1209 10:49:12.720049  627293 main.go:141] libmachine: (ha-792382)       <target type='serial' port='0'/>
	I1209 10:49:12.720070  627293 main.go:141] libmachine: (ha-792382)     </console>
	I1209 10:49:12.720083  627293 main.go:141] libmachine: (ha-792382)     <rng model='virtio'>
	I1209 10:49:12.720106  627293 main.go:141] libmachine: (ha-792382)       <backend model='random'>/dev/random</backend>
	I1209 10:49:12.720122  627293 main.go:141] libmachine: (ha-792382)     </rng>
	I1209 10:49:12.720133  627293 main.go:141] libmachine: (ha-792382)     
	I1209 10:49:12.720141  627293 main.go:141] libmachine: (ha-792382)     
	I1209 10:49:12.720152  627293 main.go:141] libmachine: (ha-792382)   </devices>
	I1209 10:49:12.720161  627293 main.go:141] libmachine: (ha-792382) </domain>
	I1209 10:49:12.720175  627293 main.go:141] libmachine: (ha-792382) 
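
The <domain> document printed above is then registered and booted ("define libvirt domain using xml" followed by "Creating domain..."). A minimal function-level sketch of that step, again assuming the libvirt.org/go/libvirt bindings rather than minikube's actual helper:

    package vmsketch

    import (
    	"fmt"

    	libvirt "libvirt.org/go/libvirt"
    )

    // defineAndStart registers the persistent domain described by domainXML
    // and boots it. domainXML would be the <domain> document from the log.
    func defineAndStart(conn *libvirt.Connect, domainXML string) (*libvirt.Domain, error) {
    	dom, err := conn.DomainDefineXML(domainXML)
    	if err != nil {
    		return nil, fmt.Errorf("define domain: %w", err)
    	}
    	if err := dom.Create(); err != nil {
    		dom.Free()
    		return nil, fmt.Errorf("start domain: %w", err)
    	}
    	return dom, nil
    }
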
	I1209 10:49:12.724156  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:b1:77:e1 in network default
	I1209 10:49:12.724674  627293 main.go:141] libmachine: (ha-792382) Ensuring networks are active...
	I1209 10:49:12.724713  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:12.725331  627293 main.go:141] libmachine: (ha-792382) Ensuring network default is active
	I1209 10:49:12.725573  627293 main.go:141] libmachine: (ha-792382) Ensuring network mk-ha-792382 is active
	I1209 10:49:12.726011  627293 main.go:141] libmachine: (ha-792382) Getting domain xml...
	I1209 10:49:12.726856  627293 main.go:141] libmachine: (ha-792382) Creating domain...
	I1209 10:49:13.913426  627293 main.go:141] libmachine: (ha-792382) Waiting to get IP...
	I1209 10:49:13.914474  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:13.914854  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:13.914884  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:13.914843  627316 retry.go:31] will retry after 231.46558ms: waiting for machine to come up
	I1209 10:49:14.148392  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:14.148786  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:14.148818  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:14.148733  627316 retry.go:31] will retry after 323.334507ms: waiting for machine to come up
	I1209 10:49:14.473105  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:14.473482  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:14.473521  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:14.473432  627316 retry.go:31] will retry after 293.410473ms: waiting for machine to come up
	I1209 10:49:14.769073  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:14.769413  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:14.769442  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:14.769369  627316 retry.go:31] will retry after 414.561658ms: waiting for machine to come up
	I1209 10:49:15.186115  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:15.186526  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:15.186550  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:15.186486  627316 retry.go:31] will retry after 602.170929ms: waiting for machine to come up
	I1209 10:49:15.790232  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:15.790609  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:15.790636  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:15.790561  627316 retry.go:31] will retry after 626.828073ms: waiting for machine to come up
	I1209 10:49:16.419433  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:16.419896  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:16.419938  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:16.419857  627316 retry.go:31] will retry after 735.370165ms: waiting for machine to come up
	I1209 10:49:17.156849  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:17.157231  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:17.157266  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:17.157218  627316 retry.go:31] will retry after 1.229419392s: waiting for machine to come up
	I1209 10:49:18.387855  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:18.388261  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:18.388300  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:18.388201  627316 retry.go:31] will retry after 1.781823768s: waiting for machine to come up
	I1209 10:49:20.172140  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:20.172552  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:20.172583  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:20.172526  627316 retry.go:31] will retry after 1.563022016s: waiting for machine to come up
	I1209 10:49:21.736731  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:21.737192  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:21.737227  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:21.737132  627316 retry.go:31] will retry after 1.796183688s: waiting for machine to come up
	I1209 10:49:23.536165  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:23.536600  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:23.536633  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:23.536553  627316 retry.go:31] will retry after 2.766987907s: waiting for machine to come up
	I1209 10:49:26.306562  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:26.306896  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:26.306918  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:26.306878  627316 retry.go:31] will retry after 3.713874413s: waiting for machine to come up
	I1209 10:49:30.024188  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:30.024650  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:30.024693  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:30.024632  627316 retry.go:31] will retry after 4.575233995s: waiting for machine to come up
	I1209 10:49:34.603079  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.603556  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has current primary IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.603577  627293 main.go:141] libmachine: (ha-792382) Found IP for machine: 192.168.39.69
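
The repeated "will retry after ..." lines above are a polling loop with growing, jittered delays until the guest's MAC shows up with a DHCP lease. A small generic sketch of that pattern; the lookup callback is a stand-in for whatever resolves the lease and is not a real minikube function:

    package vmsketch

    import (
    	"errors"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookup until it reports an address, sleeping with a
    // growing, jittered delay between attempts, like the retries in the log.
    func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, ok := lookup(); ok {
    			return ip, nil
    		}
    		// Add jitter and grow the delay, capped at a few seconds.
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
    		if delay < 4*time.Second {
    			delay = delay * 3 / 2
    		}
    	}
    	return "", errors.New("timed out waiting for machine to get an IP")
    }
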
	I1209 10:49:34.603593  627293 main.go:141] libmachine: (ha-792382) Reserving static IP address...
	I1209 10:49:34.603995  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find host DHCP lease matching {name: "ha-792382", mac: "52:54:00:a8:82:f7", ip: "192.168.39.69"} in network mk-ha-792382
	I1209 10:49:34.677115  627293 main.go:141] libmachine: (ha-792382) DBG | Getting to WaitForSSH function...
	I1209 10:49:34.677150  627293 main.go:141] libmachine: (ha-792382) Reserved static IP address: 192.168.39.69
	I1209 10:49:34.677164  627293 main.go:141] libmachine: (ha-792382) Waiting for SSH to be available...
	I1209 10:49:34.680016  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.680510  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:34.680547  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.680683  627293 main.go:141] libmachine: (ha-792382) DBG | Using SSH client type: external
	I1209 10:49:34.680713  627293 main.go:141] libmachine: (ha-792382) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa (-rw-------)
	I1209 10:49:34.680743  627293 main.go:141] libmachine: (ha-792382) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 10:49:34.680759  627293 main.go:141] libmachine: (ha-792382) DBG | About to run SSH command:
	I1209 10:49:34.680771  627293 main.go:141] libmachine: (ha-792382) DBG | exit 0
	I1209 10:49:34.802056  627293 main.go:141] libmachine: (ha-792382) DBG | SSH cmd err, output: <nil>: 
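
The WaitForSSH step above shells out to /usr/bin/ssh with host-key checking disabled and key-only auth, and simply runs `exit 0` until it succeeds. An illustrative probe built the same way with os/exec; the function name and the reduced option set are made up for this sketch:

    package vmsketch

    import "os/exec"

    // sshReady reports whether a trivial command succeeds over ssh, mirroring
    // the "About to run SSH command: exit 0" probe in the log.
    func sshReady(ip, keyPath string) bool {
    	cmd := exec.Command("/usr/bin/ssh",
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "PasswordAuthentication=no",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@"+ip,
    		"exit 0")
    	return cmd.Run() == nil
    }
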
	I1209 10:49:34.802342  627293 main.go:141] libmachine: (ha-792382) KVM machine creation complete!
	I1209 10:49:34.802652  627293 main.go:141] libmachine: (ha-792382) Calling .GetConfigRaw
	I1209 10:49:34.803265  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:34.803470  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:34.803641  627293 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 10:49:34.803655  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:49:34.804897  627293 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 10:49:34.804910  627293 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 10:49:34.804920  627293 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 10:49:34.804925  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:34.807181  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.807580  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:34.807606  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.807797  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:34.807971  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:34.808252  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:34.808380  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:34.808550  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:34.808901  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:34.808916  627293 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 10:49:34.901048  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:49:34.901075  627293 main.go:141] libmachine: Detecting the provisioner...
	I1209 10:49:34.901084  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:34.903801  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.904137  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:34.904167  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.904294  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:34.904473  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:34.904619  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:34.904801  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:34.904935  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:34.905144  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:34.905156  627293 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 10:49:34.998134  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 10:49:34.998232  627293 main.go:141] libmachine: found compatible host: buildroot
	I1209 10:49:34.998245  627293 main.go:141] libmachine: Provisioning with buildroot...
	I1209 10:49:34.998256  627293 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:49:34.998517  627293 buildroot.go:166] provisioning hostname "ha-792382"
	I1209 10:49:34.998550  627293 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:49:34.998742  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.001204  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.001556  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.001585  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.001746  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.001925  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.002086  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.002233  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.002387  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:35.002580  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:35.002594  627293 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792382 && echo "ha-792382" | sudo tee /etc/hostname
	I1209 10:49:35.111878  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792382
	
	I1209 10:49:35.111914  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.114679  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.114968  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.114999  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.115174  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.115415  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.115601  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.115731  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.115880  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:35.116106  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:35.116130  627293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792382' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792382/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792382' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 10:49:35.218632  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:49:35.218667  627293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 10:49:35.218688  627293 buildroot.go:174] setting up certificates
	I1209 10:49:35.218699  627293 provision.go:84] configureAuth start
	I1209 10:49:35.218708  627293 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:49:35.218985  627293 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:49:35.221513  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.221813  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.221835  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.221978  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.224283  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.224638  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.224666  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.224816  627293 provision.go:143] copyHostCerts
	I1209 10:49:35.224849  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:49:35.224892  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 10:49:35.224913  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:49:35.225004  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 10:49:35.225113  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:49:35.225145  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 10:49:35.225155  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:49:35.225195  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 10:49:35.225255  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:49:35.225280  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 10:49:35.225290  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:49:35.225325  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 10:49:35.225392  627293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.ha-792382 san=[127.0.0.1 192.168.39.69 ha-792382 localhost minikube]
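
The server certificate generated above is an ordinary x509 certificate signed by the minikube CA, with the SANs listed in the log (127.0.0.1, 192.168.39.69, ha-792382, localhost, minikube). A compact sketch of that signing step using only the Go standard library; the field values mirror the log, but this is not minikube's own certificate helper:

    package vmsketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert signs a server certificate with the given CA, listing the
    // SANs shown in the "generating server cert" log line. Returns DER bytes
    // (which would be PEM-encoded and copied to the guest) plus the new key.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-792382"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-792382", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.69")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }
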
	I1209 10:49:35.530739  627293 provision.go:177] copyRemoteCerts
	I1209 10:49:35.530807  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 10:49:35.530832  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.533806  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.534127  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.534158  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.534311  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.534552  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.534707  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.534862  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:35.611999  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 10:49:35.612097  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 10:49:35.633738  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 10:49:35.633820  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1209 10:49:35.654744  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 10:49:35.654813  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 10:49:35.675689  627293 provision.go:87] duration metric: took 456.977679ms to configureAuth
	I1209 10:49:35.675718  627293 buildroot.go:189] setting minikube options for container-runtime
	I1209 10:49:35.675925  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:49:35.676032  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.678943  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.679261  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.679289  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.679496  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.679710  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.679841  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.679959  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.680105  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:35.680332  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:35.680355  627293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 10:49:35.879810  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 10:49:35.879848  627293 main.go:141] libmachine: Checking connection to Docker...
	I1209 10:49:35.879878  627293 main.go:141] libmachine: (ha-792382) Calling .GetURL
	I1209 10:49:35.881298  627293 main.go:141] libmachine: (ha-792382) DBG | Using libvirt version 6000000
	I1209 10:49:35.883322  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.883653  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.883694  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.883840  627293 main.go:141] libmachine: Docker is up and running!
	I1209 10:49:35.883855  627293 main.go:141] libmachine: Reticulating splines...
	I1209 10:49:35.883863  627293 client.go:171] duration metric: took 23.63814664s to LocalClient.Create
	I1209 10:49:35.883888  627293 start.go:167] duration metric: took 23.638217304s to libmachine.API.Create "ha-792382"
	I1209 10:49:35.883903  627293 start.go:293] postStartSetup for "ha-792382" (driver="kvm2")
	I1209 10:49:35.883916  627293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 10:49:35.883934  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:35.884193  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 10:49:35.884224  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.886333  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.886719  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.886746  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.886830  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.887023  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.887177  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.887342  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:35.963840  627293 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 10:49:35.967678  627293 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 10:49:35.967709  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 10:49:35.967791  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 10:49:35.967866  627293 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 10:49:35.967876  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /etc/ssl/certs/6170172.pem
	I1209 10:49:35.967969  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 10:49:35.976432  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:49:35.997593  627293 start.go:296] duration metric: took 113.67336ms for postStartSetup
	I1209 10:49:35.997658  627293 main.go:141] libmachine: (ha-792382) Calling .GetConfigRaw
	I1209 10:49:35.998325  627293 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:49:36.000848  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.001239  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.001267  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.001479  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:49:36.001656  627293 start.go:128] duration metric: took 23.77358998s to createHost
	I1209 10:49:36.001690  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:36.004043  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.004400  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.004431  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.004549  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:36.004734  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:36.004893  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:36.005024  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:36.005202  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:36.005368  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:36.005389  627293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 10:49:36.102487  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733741376.078541083
	
	I1209 10:49:36.102513  627293 fix.go:216] guest clock: 1733741376.078541083
	I1209 10:49:36.102520  627293 fix.go:229] Guest: 2024-12-09 10:49:36.078541083 +0000 UTC Remote: 2024-12-09 10:49:36.001674575 +0000 UTC m=+23.885913523 (delta=76.866508ms)
	I1209 10:49:36.102562  627293 fix.go:200] guest clock delta is within tolerance: 76.866508ms
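
The clock check above runs `date +%s.%N` in the guest, parses the result, and compares it with the host time to decide whether the delta is within tolerance. A small sketch of that comparison; the tolerance value would be supplied by the caller and is not taken from minikube:

    package vmsketch

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses the output of `date +%s.%N` (e.g. "1733741376.078541083")
    // and returns the signed offset of the guest clock from the host clock.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, fmt.Errorf("parse seconds: %w", err)
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		// %N always prints nine digits, so this is already nanoseconds.
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return 0, fmt.Errorf("parse nanoseconds: %w", err)
    		}
    	}
    	return time.Unix(sec, nsec).Sub(host), nil
    }

    // withinTolerance reports whether the absolute delta is small enough to
    // skip re-syncing the guest clock.
    func withinTolerance(delta, tolerance time.Duration) bool {
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta <= tolerance
    }
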
	I1209 10:49:36.102567  627293 start.go:83] releasing machines lock for "ha-792382", held for 23.874584082s
	I1209 10:49:36.102599  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:36.102894  627293 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:49:36.105447  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.105786  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.105824  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.105948  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:36.106428  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:36.106564  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:36.106659  627293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 10:49:36.106712  627293 ssh_runner.go:195] Run: cat /version.json
	I1209 10:49:36.106729  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:36.106735  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:36.108936  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.108975  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.109292  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.109315  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.109331  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.109347  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.109458  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:36.109631  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:36.109648  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:36.109795  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:36.109838  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:36.109969  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:36.109997  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:36.110076  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:36.213912  627293 ssh_runner.go:195] Run: systemctl --version
	I1209 10:49:36.219737  627293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 10:49:36.373775  627293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 10:49:36.379232  627293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 10:49:36.379295  627293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 10:49:36.394395  627293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 10:49:36.394420  627293 start.go:495] detecting cgroup driver to use...
	I1209 10:49:36.394492  627293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 10:49:36.409701  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 10:49:36.422542  627293 docker.go:217] disabling cri-docker service (if available) ...
	I1209 10:49:36.422600  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 10:49:36.434811  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 10:49:36.447372  627293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 10:49:36.555614  627293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 10:49:36.712890  627293 docker.go:233] disabling docker service ...
	I1209 10:49:36.712971  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 10:49:36.726789  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 10:49:36.738514  627293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 10:49:36.860478  627293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 10:49:36.981442  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 10:49:36.994232  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 10:49:37.010639  627293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 10:49:37.010699  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.019623  627293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 10:49:37.019678  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.028741  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.037802  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.047112  627293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 10:49:37.056587  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.065626  627293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.081471  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.090400  627293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 10:49:37.098511  627293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 10:49:37.098567  627293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 10:49:37.112020  627293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 10:49:37.122574  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:49:37.244301  627293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 10:49:37.327990  627293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 10:49:37.328076  627293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 10:49:37.332519  627293 start.go:563] Will wait 60s for crictl version
	I1209 10:49:37.332580  627293 ssh_runner.go:195] Run: which crictl
	I1209 10:49:37.336027  627293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 10:49:37.371600  627293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 10:49:37.371689  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:49:37.397060  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:49:37.427301  627293 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 10:49:37.428631  627293 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:49:37.431338  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:37.431646  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:37.431664  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:37.431871  627293 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 10:49:37.435530  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:49:37.447078  627293 kubeadm.go:883] updating cluster {Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 10:49:37.447263  627293 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:49:37.447334  627293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 10:49:37.477408  627293 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 10:49:37.477478  627293 ssh_runner.go:195] Run: which lz4
	I1209 10:49:37.480957  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1209 10:49:37.481050  627293 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 10:49:37.484762  627293 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 10:49:37.484788  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 10:49:38.710605  627293 crio.go:462] duration metric: took 1.229579062s to copy over tarball
	I1209 10:49:38.710680  627293 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 10:49:40.690695  627293 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.979974769s)
	I1209 10:49:40.690734  627293 crio.go:469] duration metric: took 1.980097705s to extract the tarball
	I1209 10:49:40.690745  627293 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 10:49:40.726929  627293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 10:49:40.771095  627293 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 10:49:40.771125  627293 cache_images.go:84] Images are preloaded, skipping loading
	I1209 10:49:40.771136  627293 kubeadm.go:934] updating node { 192.168.39.69 8443 v1.31.2 crio true true} ...
	I1209 10:49:40.771264  627293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792382 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 10:49:40.771357  627293 ssh_runner.go:195] Run: crio config
	I1209 10:49:40.816747  627293 cni.go:84] Creating CNI manager for ""
	I1209 10:49:40.816772  627293 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 10:49:40.816783  627293 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 10:49:40.816808  627293 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-792382 NodeName:ha-792382 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 10:49:40.816935  627293 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-792382"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.69"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.69"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 10:49:40.816960  627293 kube-vip.go:115] generating kube-vip config ...
	I1209 10:49:40.817003  627293 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 10:49:40.831794  627293 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 10:49:40.831917  627293 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1209 10:49:40.831988  627293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 10:49:40.841266  627293 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 10:49:40.841344  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1209 10:49:40.850351  627293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1209 10:49:40.865301  627293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 10:49:40.880173  627293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1209 10:49:40.895089  627293 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1209 10:49:40.909836  627293 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 10:49:40.913336  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:49:40.924356  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:49:41.046665  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:49:41.063018  627293 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382 for IP: 192.168.39.69
	I1209 10:49:41.063041  627293 certs.go:194] generating shared ca certs ...
	I1209 10:49:41.063062  627293 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.063244  627293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 10:49:41.063289  627293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 10:49:41.063300  627293 certs.go:256] generating profile certs ...
	I1209 10:49:41.063355  627293 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key
	I1209 10:49:41.063367  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt with IP's: []
	I1209 10:49:41.129843  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt ...
	I1209 10:49:41.129870  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt: {Name:mkf984c9e526db9b810af9b168d6930601d7ed72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.130077  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key ...
	I1209 10:49:41.130094  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key: {Name:mk7ce7334711bfa08abe5164a05b3a0e352b8f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.130213  627293 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.27c0a765
	I1209 10:49:41.130234  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.27c0a765 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.254]
	I1209 10:49:41.505985  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.27c0a765 ...
	I1209 10:49:41.506019  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.27c0a765: {Name:mkd0b0619960f58505ea5c5b1f53c5a2d8b55baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.506242  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.27c0a765 ...
	I1209 10:49:41.506261  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.27c0a765: {Name:mk67bc39f2b151954187d9bdff2b01a7060c0444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.506368  627293 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.27c0a765 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt
	I1209 10:49:41.506445  627293 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.27c0a765 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key
	I1209 10:49:41.506499  627293 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key
	I1209 10:49:41.506513  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt with IP's: []
	I1209 10:49:41.582775  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt ...
	I1209 10:49:41.582805  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt: {Name:mk8ba382df4a8d41cbb5595274fb67800a146923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.582997  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key ...
	I1209 10:49:41.583012  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key: {Name:mka4002ccf01f2f736e4a0e998ece96628af1083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.583117  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 10:49:41.583147  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 10:49:41.583161  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 10:49:41.583173  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 10:49:41.583197  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 10:49:41.583210  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 10:49:41.583222  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 10:49:41.583234  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 10:49:41.583286  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 10:49:41.583322  627293 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 10:49:41.583332  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 10:49:41.583354  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 10:49:41.583377  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 10:49:41.583404  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 10:49:41.583441  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:49:41.583468  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /usr/share/ca-certificates/6170172.pem
	I1209 10:49:41.583481  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:49:41.583493  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem -> /usr/share/ca-certificates/617017.pem
	I1209 10:49:41.584023  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 10:49:41.607858  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 10:49:41.629298  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 10:49:41.650915  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 10:49:41.672892  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 10:49:41.695834  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 10:49:41.719653  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 10:49:41.742298  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 10:49:41.764468  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 10:49:41.786947  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 10:49:41.811703  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 10:49:41.837346  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 10:49:41.855854  627293 ssh_runner.go:195] Run: openssl version
	I1209 10:49:41.862371  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 10:49:41.872771  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 10:49:41.878140  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 10:49:41.878210  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 10:49:41.883640  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 10:49:41.893209  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 10:49:41.902869  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 10:49:41.906850  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 10:49:41.906898  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 10:49:41.912084  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 10:49:41.922405  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 10:49:41.932252  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:49:41.936213  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:49:41.936274  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:49:41.941486  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 10:49:41.951188  627293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 10:49:41.954834  627293 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 10:49:41.954890  627293 kubeadm.go:392] StartCluster: {Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:49:41.954978  627293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 10:49:41.955029  627293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 10:49:41.990596  627293 cri.go:89] found id: ""
	I1209 10:49:41.990674  627293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 10:49:41.999783  627293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 10:49:42.008238  627293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 10:49:42.016846  627293 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 10:49:42.016865  627293 kubeadm.go:157] found existing configuration files:
	
	I1209 10:49:42.016904  627293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 10:49:42.024739  627293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 10:49:42.024809  627293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 10:49:42.033044  627293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 10:49:42.040972  627293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 10:49:42.041020  627293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 10:49:42.049238  627293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 10:49:42.056966  627293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 10:49:42.057032  627293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 10:49:42.065232  627293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 10:49:42.073082  627293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 10:49:42.073123  627293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 10:49:42.081145  627293 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 10:49:42.179849  627293 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 10:49:42.179910  627293 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 10:49:42.276408  627293 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 10:49:42.276561  627293 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 10:49:42.276716  627293 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 10:49:42.284852  627293 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 10:49:42.286435  627293 out.go:235]   - Generating certificates and keys ...
	I1209 10:49:42.286522  627293 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 10:49:42.286594  627293 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 10:49:42.590387  627293 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 10:49:42.745055  627293 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 10:49:42.887467  627293 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 10:49:43.151549  627293 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 10:49:43.207644  627293 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 10:49:43.207798  627293 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-792382 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	I1209 10:49:43.393565  627293 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 10:49:43.393710  627293 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-792382 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	I1209 10:49:43.595429  627293 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 10:49:43.672644  627293 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 10:49:43.819815  627293 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 10:49:43.819914  627293 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 10:49:44.041243  627293 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 10:49:44.173892  627293 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 10:49:44.337644  627293 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 10:49:44.481944  627293 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 10:49:44.539526  627293 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 10:49:44.540094  627293 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 10:49:44.543689  627293 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 10:49:44.575870  627293 out.go:235]   - Booting up control plane ...
	I1209 10:49:44.576053  627293 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 10:49:44.576187  627293 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 10:49:44.576309  627293 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 10:49:44.576459  627293 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 10:49:44.576560  627293 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 10:49:44.576606  627293 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 10:49:44.708364  627293 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 10:49:44.708561  627293 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 10:49:45.209677  627293 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.518639ms
	I1209 10:49:45.209811  627293 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 10:49:51.244834  627293 kubeadm.go:310] [api-check] The API server is healthy after 6.038769474s
	I1209 10:49:51.258766  627293 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 10:49:51.275586  627293 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 10:49:51.347505  627293 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 10:49:51.347730  627293 kubeadm.go:310] [mark-control-plane] Marking the node ha-792382 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 10:49:51.363557  627293 kubeadm.go:310] [bootstrap-token] Using token: 3fogiz.oanziwjzsm1wr1kv
	I1209 10:49:51.364826  627293 out.go:235]   - Configuring RBAC rules ...
	I1209 10:49:51.364951  627293 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 10:49:51.370786  627293 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 10:49:51.381797  627293 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 10:49:51.388857  627293 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 10:49:51.392743  627293 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 10:49:51.397933  627293 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 10:49:51.652382  627293 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 10:49:52.085079  627293 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 10:49:52.651844  627293 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 10:49:52.653438  627293 kubeadm.go:310] 
	I1209 10:49:52.653557  627293 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 10:49:52.653580  627293 kubeadm.go:310] 
	I1209 10:49:52.653672  627293 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 10:49:52.653682  627293 kubeadm.go:310] 
	I1209 10:49:52.653710  627293 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 10:49:52.653783  627293 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 10:49:52.653859  627293 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 10:49:52.653869  627293 kubeadm.go:310] 
	I1209 10:49:52.653946  627293 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 10:49:52.653955  627293 kubeadm.go:310] 
	I1209 10:49:52.654040  627293 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 10:49:52.654062  627293 kubeadm.go:310] 
	I1209 10:49:52.654116  627293 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 10:49:52.654229  627293 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 10:49:52.654328  627293 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 10:49:52.654347  627293 kubeadm.go:310] 
	I1209 10:49:52.654461  627293 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 10:49:52.654579  627293 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 10:49:52.654591  627293 kubeadm.go:310] 
	I1209 10:49:52.654710  627293 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3fogiz.oanziwjzsm1wr1kv \
	I1209 10:49:52.654860  627293 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 10:49:52.654894  627293 kubeadm.go:310] 	--control-plane 
	I1209 10:49:52.654903  627293 kubeadm.go:310] 
	I1209 10:49:52.655035  627293 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 10:49:52.655045  627293 kubeadm.go:310] 
	I1209 10:49:52.655125  627293 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3fogiz.oanziwjzsm1wr1kv \
	I1209 10:49:52.655253  627293 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 10:49:52.656128  627293 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 10:49:52.656180  627293 cni.go:84] Creating CNI manager for ""
	I1209 10:49:52.656208  627293 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 10:49:52.657779  627293 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1209 10:49:52.659033  627293 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 10:49:52.663808  627293 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1209 10:49:52.663829  627293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 10:49:52.683028  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 10:49:53.058715  627293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 10:49:53.058827  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:53.058833  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792382 minikube.k8s.io/updated_at=2024_12_09T10_49_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=ha-792382 minikube.k8s.io/primary=true
	I1209 10:49:53.086878  627293 ops.go:34] apiserver oom_adj: -16
	I1209 10:49:53.256202  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:53.756573  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:54.256994  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:54.756404  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:55.257137  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:55.756813  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:56.256686  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:56.352743  627293 kubeadm.go:1113] duration metric: took 3.294004538s to wait for elevateKubeSystemPrivileges
	I1209 10:49:56.352793  627293 kubeadm.go:394] duration metric: took 14.397907015s to StartCluster
	I1209 10:49:56.352820  627293 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:56.352918  627293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:49:56.354019  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:56.354304  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 10:49:56.354300  627293 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:49:56.354326  627293 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 10:49:56.354417  627293 start.go:241] waiting for startup goroutines ...
	I1209 10:49:56.354432  627293 addons.go:69] Setting storage-provisioner=true in profile "ha-792382"
	I1209 10:49:56.354455  627293 addons.go:234] Setting addon storage-provisioner=true in "ha-792382"
	I1209 10:49:56.354464  627293 addons.go:69] Setting default-storageclass=true in profile "ha-792382"
	I1209 10:49:56.354495  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:49:56.354504  627293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-792382"
	I1209 10:49:56.354547  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:49:56.354836  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.354867  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.354970  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.355019  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.371190  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I1209 10:49:56.371264  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40229
	I1209 10:49:56.371767  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.371795  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.372258  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.372273  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.372420  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.372446  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.372589  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.372844  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.373068  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:49:56.373184  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.373230  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.375150  627293 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:49:56.375437  627293 kapi.go:59] client config for ha-792382: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt", KeyFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key", CAFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 10:49:56.375916  627293 cert_rotation.go:140] Starting client certificate rotation controller
	I1209 10:49:56.376176  627293 addons.go:234] Setting addon default-storageclass=true in "ha-792382"
	I1209 10:49:56.376225  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:49:56.376515  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.376560  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.389420  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45615
	I1209 10:49:56.390064  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.390648  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.390676  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.391072  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.391316  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:49:56.391995  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I1209 10:49:56.392539  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.393048  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.393071  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.393381  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.393446  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:56.393880  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.393927  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.395537  627293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 10:49:56.396877  627293 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 10:49:56.396901  627293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 10:49:56.396927  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:56.399986  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:56.400413  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:56.400445  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:56.400639  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:56.400862  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:56.401027  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:56.401192  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:56.410237  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I1209 10:49:56.411256  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.413501  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.413527  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.414391  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.414656  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:49:56.416343  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:56.416575  627293 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 10:49:56.416592  627293 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 10:49:56.416608  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:56.419239  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:56.419746  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:56.419776  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:56.419875  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:56.420076  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:56.420261  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:56.420422  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:56.497434  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 10:49:56.595755  627293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 10:49:56.677666  627293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 10:49:57.066334  627293 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1209 10:49:57.258939  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.258974  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.258947  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.259060  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.259277  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.259322  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.259343  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.259358  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.259450  627293 main.go:141] libmachine: (ha-792382) DBG | Closing plugin on server side
	I1209 10:49:57.259495  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.259510  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.259523  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.259535  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.259638  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.259658  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.259664  627293 main.go:141] libmachine: (ha-792382) DBG | Closing plugin on server side
	I1209 10:49:57.259795  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.259815  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.259895  627293 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 10:49:57.259914  627293 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 10:49:57.260014  627293 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1209 10:49:57.260024  627293 round_trippers.go:469] Request Headers:
	I1209 10:49:57.260035  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:49:57.260046  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:49:57.272826  627293 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1209 10:49:57.273379  627293 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1209 10:49:57.273393  627293 round_trippers.go:469] Request Headers:
	I1209 10:49:57.273400  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:49:57.273404  627293 round_trippers.go:473]     Content-Type: application/json
	I1209 10:49:57.273408  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:49:57.276004  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:49:57.276170  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.276182  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.276582  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.276606  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.278423  627293 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1209 10:49:57.279715  627293 addons.go:510] duration metric: took 925.38672ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 10:49:57.279752  627293 start.go:246] waiting for cluster config update ...
	I1209 10:49:57.279765  627293 start.go:255] writing updated cluster config ...
	I1209 10:49:57.281341  627293 out.go:201] 
	I1209 10:49:57.282688  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:49:57.282758  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:49:57.284265  627293 out.go:177] * Starting "ha-792382-m02" control-plane node in "ha-792382" cluster
	I1209 10:49:57.285340  627293 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:49:57.285363  627293 cache.go:56] Caching tarball of preloaded images
	I1209 10:49:57.285479  627293 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 10:49:57.285499  627293 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 10:49:57.285580  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:49:57.285772  627293 start.go:360] acquireMachinesLock for ha-792382-m02: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 10:49:57.285830  627293 start.go:364] duration metric: took 34.649µs to acquireMachinesLock for "ha-792382-m02"
	I1209 10:49:57.285855  627293 start.go:93] Provisioning new machine with config: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:49:57.285945  627293 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1209 10:49:57.287544  627293 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 10:49:57.287637  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:57.287679  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:57.302923  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45627
	I1209 10:49:57.303345  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:57.303929  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:57.303955  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:57.304276  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:57.304507  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetMachineName
	I1209 10:49:57.304682  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:49:57.304915  627293 start.go:159] libmachine.API.Create for "ha-792382" (driver="kvm2")
	I1209 10:49:57.304958  627293 client.go:168] LocalClient.Create starting
	I1209 10:49:57.305006  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 10:49:57.305054  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:49:57.305076  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:49:57.305150  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 10:49:57.305184  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:49:57.305200  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:49:57.305226  627293 main.go:141] libmachine: Running pre-create checks...
	I1209 10:49:57.305237  627293 main.go:141] libmachine: (ha-792382-m02) Calling .PreCreateCheck
	I1209 10:49:57.305467  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetConfigRaw
	I1209 10:49:57.305949  627293 main.go:141] libmachine: Creating machine...
	I1209 10:49:57.305967  627293 main.go:141] libmachine: (ha-792382-m02) Calling .Create
	I1209 10:49:57.306165  627293 main.go:141] libmachine: (ha-792382-m02) Creating KVM machine...
	I1209 10:49:57.307365  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found existing default KVM network
	I1209 10:49:57.307532  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found existing private KVM network mk-ha-792382
	I1209 10:49:57.307606  627293 main.go:141] libmachine: (ha-792382-m02) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02 ...
	I1209 10:49:57.307640  627293 main.go:141] libmachine: (ha-792382-m02) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 10:49:57.307676  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:57.307595  627662 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:57.307776  627293 main.go:141] libmachine: (ha-792382-m02) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 10:49:57.586533  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:57.586377  627662 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa...
	I1209 10:49:57.697560  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:57.697424  627662 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/ha-792382-m02.rawdisk...
	I1209 10:49:57.697602  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Writing magic tar header
	I1209 10:49:57.697613  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Writing SSH key tar header
	I1209 10:49:57.697621  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:57.697562  627662 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02 ...
	I1209 10:49:57.697695  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02
	I1209 10:49:57.697714  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 10:49:57.697722  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02 (perms=drwx------)
	I1209 10:49:57.697738  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 10:49:57.697757  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:57.697771  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 10:49:57.697780  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 10:49:57.697790  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 10:49:57.697797  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins
	I1209 10:49:57.697803  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home
	I1209 10:49:57.697812  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Skipping /home - not owner
	I1209 10:49:57.697828  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 10:49:57.697853  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 10:49:57.697862  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 10:49:57.697867  627293 main.go:141] libmachine: (ha-792382-m02) Creating domain...
	I1209 10:49:57.698931  627293 main.go:141] libmachine: (ha-792382-m02) define libvirt domain using xml: 
	I1209 10:49:57.698948  627293 main.go:141] libmachine: (ha-792382-m02) <domain type='kvm'>
	I1209 10:49:57.698955  627293 main.go:141] libmachine: (ha-792382-m02)   <name>ha-792382-m02</name>
	I1209 10:49:57.698960  627293 main.go:141] libmachine: (ha-792382-m02)   <memory unit='MiB'>2200</memory>
	I1209 10:49:57.698965  627293 main.go:141] libmachine: (ha-792382-m02)   <vcpu>2</vcpu>
	I1209 10:49:57.698968  627293 main.go:141] libmachine: (ha-792382-m02)   <features>
	I1209 10:49:57.698974  627293 main.go:141] libmachine: (ha-792382-m02)     <acpi/>
	I1209 10:49:57.698977  627293 main.go:141] libmachine: (ha-792382-m02)     <apic/>
	I1209 10:49:57.698982  627293 main.go:141] libmachine: (ha-792382-m02)     <pae/>
	I1209 10:49:57.698985  627293 main.go:141] libmachine: (ha-792382-m02)     
	I1209 10:49:57.698991  627293 main.go:141] libmachine: (ha-792382-m02)   </features>
	I1209 10:49:57.698996  627293 main.go:141] libmachine: (ha-792382-m02)   <cpu mode='host-passthrough'>
	I1209 10:49:57.699000  627293 main.go:141] libmachine: (ha-792382-m02)   
	I1209 10:49:57.699004  627293 main.go:141] libmachine: (ha-792382-m02)   </cpu>
	I1209 10:49:57.699009  627293 main.go:141] libmachine: (ha-792382-m02)   <os>
	I1209 10:49:57.699013  627293 main.go:141] libmachine: (ha-792382-m02)     <type>hvm</type>
	I1209 10:49:57.699018  627293 main.go:141] libmachine: (ha-792382-m02)     <boot dev='cdrom'/>
	I1209 10:49:57.699034  627293 main.go:141] libmachine: (ha-792382-m02)     <boot dev='hd'/>
	I1209 10:49:57.699053  627293 main.go:141] libmachine: (ha-792382-m02)     <bootmenu enable='no'/>
	I1209 10:49:57.699065  627293 main.go:141] libmachine: (ha-792382-m02)   </os>
	I1209 10:49:57.699070  627293 main.go:141] libmachine: (ha-792382-m02)   <devices>
	I1209 10:49:57.699074  627293 main.go:141] libmachine: (ha-792382-m02)     <disk type='file' device='cdrom'>
	I1209 10:49:57.699083  627293 main.go:141] libmachine: (ha-792382-m02)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/boot2docker.iso'/>
	I1209 10:49:57.699087  627293 main.go:141] libmachine: (ha-792382-m02)       <target dev='hdc' bus='scsi'/>
	I1209 10:49:57.699092  627293 main.go:141] libmachine: (ha-792382-m02)       <readonly/>
	I1209 10:49:57.699095  627293 main.go:141] libmachine: (ha-792382-m02)     </disk>
	I1209 10:49:57.699101  627293 main.go:141] libmachine: (ha-792382-m02)     <disk type='file' device='disk'>
	I1209 10:49:57.699106  627293 main.go:141] libmachine: (ha-792382-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 10:49:57.699114  627293 main.go:141] libmachine: (ha-792382-m02)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/ha-792382-m02.rawdisk'/>
	I1209 10:49:57.699122  627293 main.go:141] libmachine: (ha-792382-m02)       <target dev='hda' bus='virtio'/>
	I1209 10:49:57.699137  627293 main.go:141] libmachine: (ha-792382-m02)     </disk>
	I1209 10:49:57.699147  627293 main.go:141] libmachine: (ha-792382-m02)     <interface type='network'>
	I1209 10:49:57.699179  627293 main.go:141] libmachine: (ha-792382-m02)       <source network='mk-ha-792382'/>
	I1209 10:49:57.699205  627293 main.go:141] libmachine: (ha-792382-m02)       <model type='virtio'/>
	I1209 10:49:57.699214  627293 main.go:141] libmachine: (ha-792382-m02)     </interface>
	I1209 10:49:57.699227  627293 main.go:141] libmachine: (ha-792382-m02)     <interface type='network'>
	I1209 10:49:57.699257  627293 main.go:141] libmachine: (ha-792382-m02)       <source network='default'/>
	I1209 10:49:57.699276  627293 main.go:141] libmachine: (ha-792382-m02)       <model type='virtio'/>
	I1209 10:49:57.699287  627293 main.go:141] libmachine: (ha-792382-m02)     </interface>
	I1209 10:49:57.699295  627293 main.go:141] libmachine: (ha-792382-m02)     <serial type='pty'>
	I1209 10:49:57.699302  627293 main.go:141] libmachine: (ha-792382-m02)       <target port='0'/>
	I1209 10:49:57.699309  627293 main.go:141] libmachine: (ha-792382-m02)     </serial>
	I1209 10:49:57.699314  627293 main.go:141] libmachine: (ha-792382-m02)     <console type='pty'>
	I1209 10:49:57.699320  627293 main.go:141] libmachine: (ha-792382-m02)       <target type='serial' port='0'/>
	I1209 10:49:57.699325  627293 main.go:141] libmachine: (ha-792382-m02)     </console>
	I1209 10:49:57.699332  627293 main.go:141] libmachine: (ha-792382-m02)     <rng model='virtio'>
	I1209 10:49:57.699338  627293 main.go:141] libmachine: (ha-792382-m02)       <backend model='random'>/dev/random</backend>
	I1209 10:49:57.699352  627293 main.go:141] libmachine: (ha-792382-m02)     </rng>
	I1209 10:49:57.699360  627293 main.go:141] libmachine: (ha-792382-m02)     
	I1209 10:49:57.699364  627293 main.go:141] libmachine: (ha-792382-m02)     
	I1209 10:49:57.699370  627293 main.go:141] libmachine: (ha-792382-m02)   </devices>
	I1209 10:49:57.699374  627293 main.go:141] libmachine: (ha-792382-m02) </domain>
	I1209 10:49:57.699384  627293 main.go:141] libmachine: (ha-792382-m02) 
	I1209 10:49:57.706829  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:be:31:4f in network default
	I1209 10:49:57.707394  627293 main.go:141] libmachine: (ha-792382-m02) Ensuring networks are active...
	I1209 10:49:57.707420  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:49:57.708099  627293 main.go:141] libmachine: (ha-792382-m02) Ensuring network default is active
	I1209 10:49:57.708447  627293 main.go:141] libmachine: (ha-792382-m02) Ensuring network mk-ha-792382 is active
	I1209 10:49:57.708833  627293 main.go:141] libmachine: (ha-792382-m02) Getting domain xml...
	I1209 10:49:57.709562  627293 main.go:141] libmachine: (ha-792382-m02) Creating domain...
	I1209 10:49:58.965991  627293 main.go:141] libmachine: (ha-792382-m02) Waiting to get IP...
	I1209 10:49:58.967025  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:49:58.967615  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:49:58.967718  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:58.967609  627662 retry.go:31] will retry after 289.483594ms: waiting for machine to come up
	I1209 10:49:59.259398  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:49:59.259929  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:49:59.259958  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:59.259877  627662 retry.go:31] will retry after 368.739813ms: waiting for machine to come up
	I1209 10:49:59.630595  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:49:59.631082  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:49:59.631111  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:59.631032  627662 retry.go:31] will retry after 468.793736ms: waiting for machine to come up
	I1209 10:50:00.101924  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:00.102437  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:00.102468  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:00.102389  627662 retry.go:31] will retry after 467.16032ms: waiting for machine to come up
	I1209 10:50:00.571568  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:00.572085  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:00.572158  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:00.571967  627662 retry.go:31] will retry after 614.331886ms: waiting for machine to come up
	I1209 10:50:01.188165  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:01.188721  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:01.188753  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:01.188683  627662 retry.go:31] will retry after 622.291039ms: waiting for machine to come up
	I1209 10:50:01.812761  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:01.813166  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:01.813197  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:01.813093  627662 retry.go:31] will retry after 970.350077ms: waiting for machine to come up
	I1209 10:50:02.785861  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:02.786416  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:02.786477  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:02.786368  627662 retry.go:31] will retry after 1.09205339s: waiting for machine to come up
	I1209 10:50:03.879814  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:03.880303  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:03.880327  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:03.880248  627662 retry.go:31] will retry after 1.765651975s: waiting for machine to come up
	I1209 10:50:05.648159  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:05.648671  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:05.648696  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:05.648615  627662 retry.go:31] will retry after 1.762832578s: waiting for machine to come up
	I1209 10:50:07.413599  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:07.414030  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:07.414059  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:07.413978  627662 retry.go:31] will retry after 2.150383903s: waiting for machine to come up
	I1209 10:50:09.565911  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:09.566390  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:09.566420  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:09.566350  627662 retry.go:31] will retry after 3.049537741s: waiting for machine to come up
	I1209 10:50:12.617744  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:12.618241  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:12.618276  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:12.618155  627662 retry.go:31] will retry after 3.599687882s: waiting for machine to come up
	I1209 10:50:16.219399  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:16.219837  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:16.219868  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:16.219789  627662 retry.go:31] will retry after 3.518875962s: waiting for machine to come up
	I1209 10:50:19.740130  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.740985  627293 main.go:141] libmachine: (ha-792382-m02) Found IP for machine: 192.168.39.89
	I1209 10:50:19.741024  627293 main.go:141] libmachine: (ha-792382-m02) Reserving static IP address...
	I1209 10:50:19.741037  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.741518  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find host DHCP lease matching {name: "ha-792382-m02", mac: "52:54:00:95:40:00", ip: "192.168.39.89"} in network mk-ha-792382
	I1209 10:50:19.814048  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Getting to WaitForSSH function...
	I1209 10:50:19.814070  627293 main.go:141] libmachine: (ha-792382-m02) Reserved static IP address: 192.168.39.89
	I1209 10:50:19.814078  627293 main.go:141] libmachine: (ha-792382-m02) Waiting for SSH to be available...
	I1209 10:50:19.816613  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.817057  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:95:40:00}
	I1209 10:50:19.817166  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.817261  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Using SSH client type: external
	I1209 10:50:19.817282  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa (-rw-------)
	I1209 10:50:19.817362  627293 main.go:141] libmachine: (ha-792382-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 10:50:19.817390  627293 main.go:141] libmachine: (ha-792382-m02) DBG | About to run SSH command:
	I1209 10:50:19.817411  627293 main.go:141] libmachine: (ha-792382-m02) DBG | exit 0
	I1209 10:50:19.942297  627293 main.go:141] libmachine: (ha-792382-m02) DBG | SSH cmd err, output: <nil>: 
	I1209 10:50:19.942595  627293 main.go:141] libmachine: (ha-792382-m02) KVM machine creation complete!
	I1209 10:50:19.942914  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetConfigRaw
	I1209 10:50:19.943559  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:19.943781  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:19.943947  627293 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 10:50:19.943965  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetState
	I1209 10:50:19.945579  627293 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 10:50:19.945598  627293 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 10:50:19.945607  627293 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 10:50:19.945616  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:19.947916  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.948374  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:19.948400  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.948582  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:19.948773  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:19.948920  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:19.949049  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:19.949307  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:19.949555  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:19.949573  627293 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 10:50:20.053499  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:50:20.053528  627293 main.go:141] libmachine: Detecting the provisioner...
	I1209 10:50:20.053541  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.056444  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.056881  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.056911  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.057119  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.057366  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.057545  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.057698  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.057856  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:20.058022  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:20.058034  627293 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 10:50:20.162532  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 10:50:20.162621  627293 main.go:141] libmachine: found compatible host: buildroot
	I1209 10:50:20.162636  627293 main.go:141] libmachine: Provisioning with buildroot...
	I1209 10:50:20.162651  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetMachineName
	I1209 10:50:20.162892  627293 buildroot.go:166] provisioning hostname "ha-792382-m02"
	I1209 10:50:20.162921  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetMachineName
	I1209 10:50:20.163135  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.165692  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.166051  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.166078  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.166237  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.166425  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.166592  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.166734  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.166863  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:20.167071  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:20.167087  627293 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792382-m02 && echo "ha-792382-m02" | sudo tee /etc/hostname
	I1209 10:50:20.285783  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792382-m02
	
	I1209 10:50:20.285812  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.288581  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.288945  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.289006  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.289156  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.289374  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.289525  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.289675  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.289834  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:20.290050  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:20.290067  627293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792382-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792382-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792382-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 10:50:20.403745  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:50:20.403780  627293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 10:50:20.403797  627293 buildroot.go:174] setting up certificates
	I1209 10:50:20.403807  627293 provision.go:84] configureAuth start
	I1209 10:50:20.403816  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetMachineName
	I1209 10:50:20.404127  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetIP
	I1209 10:50:20.406853  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.407317  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.407339  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.407523  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.410235  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.410616  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.410641  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.410813  627293 provision.go:143] copyHostCerts
	I1209 10:50:20.410851  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:50:20.410897  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 10:50:20.410910  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:50:20.410996  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 10:50:20.411092  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:50:20.411117  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 10:50:20.411127  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:50:20.411167  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 10:50:20.411241  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:50:20.411265  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 10:50:20.411274  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:50:20.411310  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 10:50:20.411379  627293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.ha-792382-m02 san=[127.0.0.1 192.168.39.89 ha-792382-m02 localhost minikube]
	I1209 10:50:20.506946  627293 provision.go:177] copyRemoteCerts
	I1209 10:50:20.507013  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 10:50:20.507043  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.509588  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.509997  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.510031  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.510256  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.510485  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.510630  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.510792  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa Username:docker}
	I1209 10:50:20.591669  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 10:50:20.591738  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 10:50:20.614379  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 10:50:20.614474  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 10:50:20.635752  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 10:50:20.635819  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 10:50:20.657840  627293 provision.go:87] duration metric: took 254.019642ms to configureAuth
	I1209 10:50:20.657873  627293 buildroot.go:189] setting minikube options for container-runtime
	I1209 10:50:20.658088  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:50:20.658221  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.661758  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.662150  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.662207  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.662350  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.662590  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.662773  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.662982  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.663174  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:20.663396  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:20.663417  627293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 10:50:20.895342  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 10:50:20.895376  627293 main.go:141] libmachine: Checking connection to Docker...
	I1209 10:50:20.895386  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetURL
	I1209 10:50:20.896678  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Using libvirt version 6000000
	I1209 10:50:20.899127  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.899492  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.899524  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.899662  627293 main.go:141] libmachine: Docker is up and running!
	I1209 10:50:20.899675  627293 main.go:141] libmachine: Reticulating splines...
	I1209 10:50:20.899683  627293 client.go:171] duration metric: took 23.594715586s to LocalClient.Create
	I1209 10:50:20.899712  627293 start.go:167] duration metric: took 23.594799788s to libmachine.API.Create "ha-792382"
	I1209 10:50:20.899727  627293 start.go:293] postStartSetup for "ha-792382-m02" (driver="kvm2")
	I1209 10:50:20.899740  627293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 10:50:20.899762  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:20.899988  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 10:50:20.900011  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.902193  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.902545  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.902574  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.902733  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.902907  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.903055  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.903224  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa Username:docker}
	I1209 10:50:20.987979  627293 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 10:50:20.992183  627293 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 10:50:20.992210  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 10:50:20.992280  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 10:50:20.992373  627293 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 10:50:20.992388  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /etc/ssl/certs/6170172.pem
	I1209 10:50:20.992517  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 10:50:21.001255  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:50:21.023333  627293 start.go:296] duration metric: took 123.590873ms for postStartSetup
	I1209 10:50:21.023382  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetConfigRaw
	I1209 10:50:21.024074  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetIP
	I1209 10:50:21.026760  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.027216  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.027253  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.027452  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:50:21.027657  627293 start.go:128] duration metric: took 23.741699232s to createHost
	I1209 10:50:21.027689  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:21.029948  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.030322  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.030343  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.030537  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:21.030708  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:21.030868  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:21.031040  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:21.031235  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:21.031525  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:21.031542  627293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 10:50:21.134634  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733741421.109382404
	
	I1209 10:50:21.134664  627293 fix.go:216] guest clock: 1733741421.109382404
	I1209 10:50:21.134671  627293 fix.go:229] Guest: 2024-12-09 10:50:21.109382404 +0000 UTC Remote: 2024-12-09 10:50:21.027672389 +0000 UTC m=+68.911911388 (delta=81.710015ms)
	I1209 10:50:21.134687  627293 fix.go:200] guest clock delta is within tolerance: 81.710015ms
	I1209 10:50:21.134693  627293 start.go:83] releasing machines lock for "ha-792382-m02", held for 23.84885063s
	I1209 10:50:21.134711  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:21.135011  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetIP
	I1209 10:50:21.137922  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.138329  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.138359  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.140711  627293 out.go:177] * Found network options:
	I1209 10:50:21.142033  627293 out.go:177]   - NO_PROXY=192.168.39.69
	W1209 10:50:21.143264  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 10:50:21.143304  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:21.143961  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:21.144186  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:21.144305  627293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 10:50:21.144354  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	W1209 10:50:21.144454  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 10:50:21.144534  627293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 10:50:21.144559  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:21.147622  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.147846  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.147959  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.147994  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.148084  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:21.148250  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:21.148369  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.148396  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.148419  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:21.148619  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa Username:docker}
	I1209 10:50:21.148763  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:21.148870  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:21.149177  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:21.149326  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa Username:docker}
	I1209 10:50:21.377528  627293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 10:50:21.383869  627293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 10:50:21.383962  627293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 10:50:21.402713  627293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
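The find/mv pass above renames any bridge or podman CNI definitions with a .mk_disabled suffix, which CRI-O does not load, so the guest image's default bridge network cannot conflict with the CNI the cluster installs later; the loopback config is deliberately left alone. A quick way to see the effect on the node, assuming the same /etc/cni/net.d layout as in the log:

    # disabled configs keep their content but gain a suffix the runtime ignores
    ls -l /etc/cni/net.d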
	I1209 10:50:21.402747  627293 start.go:495] detecting cgroup driver to use...
	I1209 10:50:21.402836  627293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 10:50:21.418644  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 10:50:21.431825  627293 docker.go:217] disabling cri-docker service (if available) ...
	I1209 10:50:21.431894  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 10:50:21.445030  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 10:50:21.458235  627293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 10:50:21.576888  627293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 10:50:21.715254  627293 docker.go:233] disabling docker service ...
	I1209 10:50:21.715337  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 10:50:21.728777  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 10:50:21.741484  627293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 10:50:21.877920  627293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 10:50:21.987438  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 10:50:22.000287  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 10:50:22.017967  627293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 10:50:22.018044  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.027586  627293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 10:50:22.027647  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.037032  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.046716  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.056390  627293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 10:50:22.066025  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.075591  627293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.092169  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.102292  627293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 10:50:22.111580  627293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 10:50:22.111645  627293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 10:50:22.124823  627293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 10:50:22.134059  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:50:22.267517  627293 ssh_runner.go:195] Run: sudo systemctl restart crio
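The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place and then restarts the runtime; the failed sysctl probe a few lines earlier only means br_netfilter was not loaded yet, which the subsequent modprobe fixes. Condensed into one pass, the edits the log shows amount to roughly the following (paths and values taken from the log itself; run on the guest as root):

    # pin the pause image and switch CRI-O to the cgroupfs driver
    sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # make sure bridged traffic hits iptables and forwarding is on
    modprobe br_netfilter
    echo 1 > /proc/sys/net/ipv4/ip_forward
    systemctl daemon-reload && systemctl restart crio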
	I1209 10:50:22.360113  627293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 10:50:22.360202  627293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 10:50:22.366049  627293 start.go:563] Will wait 60s for crictl version
	I1209 10:50:22.366124  627293 ssh_runner.go:195] Run: which crictl
	I1209 10:50:22.369685  627293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 10:50:22.406117  627293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 10:50:22.406233  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:50:22.433831  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:50:22.466702  627293 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 10:50:22.468114  627293 out.go:177]   - env NO_PROXY=192.168.39.69
	I1209 10:50:22.469415  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetIP
	I1209 10:50:22.472354  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:22.472792  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:22.472838  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:22.473063  627293 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 10:50:22.478206  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:50:22.490975  627293 mustload.go:65] Loading cluster: ha-792382
	I1209 10:50:22.491223  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:50:22.491515  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:50:22.491566  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:50:22.507354  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34813
	I1209 10:50:22.507839  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:50:22.508378  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:50:22.508407  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:50:22.508811  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:50:22.509022  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:50:22.510469  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:50:22.510748  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:50:22.510785  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:50:22.525474  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34445
	I1209 10:50:22.525972  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:50:22.526524  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:50:22.526554  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:50:22.526848  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:50:22.527055  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:50:22.527271  627293 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382 for IP: 192.168.39.89
	I1209 10:50:22.527285  627293 certs.go:194] generating shared ca certs ...
	I1209 10:50:22.527308  627293 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:50:22.527465  627293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 10:50:22.527507  627293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 10:50:22.527514  627293 certs.go:256] generating profile certs ...
	I1209 10:50:22.527587  627293 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key
	I1209 10:50:22.527613  627293 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.8c4cfabb
	I1209 10:50:22.527628  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.8c4cfabb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.89 192.168.39.254]
	I1209 10:50:22.618893  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.8c4cfabb ...
	I1209 10:50:22.618924  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.8c4cfabb: {Name:mk9fc14aa3aaf65091f9f2d45f3765515e31473e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:50:22.619129  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.8c4cfabb ...
	I1209 10:50:22.619148  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.8c4cfabb: {Name:mk41f99fa98267e5a58e4b407fa7296350fea4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:50:22.619255  627293 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.8c4cfabb -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt
	I1209 10:50:22.619394  627293 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.8c4cfabb -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key
	I1209 10:50:22.619538  627293 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key
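The apiserver certificate minted here (apiserver.crt.8c4cfabb, then promoted to apiserver.crt) carries every address a client might use to reach the API, as logged above: 10.96.0.1, 127.0.0.1, 10.0.0.1, both control-plane node IPs 192.168.39.69 and 192.168.39.89, and the kube-vip address 192.168.39.254. To confirm the SANs on the copied file (path as in the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt \
      | grep -A1 'Subject Alternative Name'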
	I1209 10:50:22.619555  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 10:50:22.619568  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 10:50:22.619579  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 10:50:22.619593  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 10:50:22.619603  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 10:50:22.619614  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 10:50:22.619626  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 10:50:22.619636  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 10:50:22.619683  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 10:50:22.619711  627293 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 10:50:22.619720  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 10:50:22.619743  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 10:50:22.619767  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 10:50:22.619790  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 10:50:22.619828  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:50:22.619853  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /usr/share/ca-certificates/6170172.pem
	I1209 10:50:22.619866  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:50:22.619877  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem -> /usr/share/ca-certificates/617017.pem
	I1209 10:50:22.619908  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:50:22.623291  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:50:22.623706  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:50:22.623734  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:50:22.623919  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:50:22.624122  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:50:22.624329  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:50:22.624526  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:50:22.694590  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 10:50:22.700190  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 10:50:22.715537  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 10:50:22.720737  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 10:50:22.731623  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 10:50:22.736050  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 10:50:22.747578  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 10:50:22.752312  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 10:50:22.763588  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 10:50:22.768050  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 10:50:22.777655  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 10:50:22.781717  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1209 10:50:22.792464  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 10:50:22.816318  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 10:50:22.837988  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 10:50:22.861671  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 10:50:22.883735  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1209 10:50:22.904888  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 10:50:22.926092  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 10:50:22.947329  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 10:50:22.968466  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 10:50:22.989908  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 10:50:23.012190  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 10:50:23.036349  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 10:50:23.051329  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 10:50:23.066824  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 10:50:23.081626  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 10:50:23.096856  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 10:50:23.112249  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1209 10:50:23.126784  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 10:50:23.141365  627293 ssh_runner.go:195] Run: openssl version
	I1209 10:50:23.146879  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 10:50:23.156698  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 10:50:23.160669  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 10:50:23.160717  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 10:50:23.166987  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 10:50:23.176745  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 10:50:23.186586  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:50:23.190639  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:50:23.190687  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:50:23.195990  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 10:50:23.205745  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 10:50:23.215364  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 10:50:23.219316  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 10:50:23.219368  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 10:50:23.225208  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
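Each hashing/linking round above follows the same pattern: the certificate is staged under /usr/share/ca-certificates, openssl x509 -hash -noout prints its subject hash, and a symlink named <hash>.0 is created in /etc/ssl/certs so OpenSSL's lookup-by-hash finds it (hence 3ec20f2e.0, b5213941.0 and 51391683.0 above). For example:

    # prints b5213941, the name used for the minikubeCA symlink
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem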
	I1209 10:50:23.235141  627293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 10:50:23.238820  627293 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 10:50:23.238882  627293 kubeadm.go:934] updating node {m02 192.168.39.89 8443 v1.31.2 crio true true} ...
	I1209 10:50:23.238988  627293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792382-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
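In the kubelet unit fragment above, the empty ExecStart= line is the usual systemd drop-in idiom for clearing the packaged command before the node-specific invocation (with --hostname-override=ha-792382-m02 and --node-ip=192.168.39.89) is substituted; the fragment itself is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a little further down. A sketch for checking the merged result on the node:

    # base unit plus drop-ins, then the command line that is actually active
    systemctl cat kubelet
    systemctl show kubelet -p ExecStart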
	I1209 10:50:23.239016  627293 kube-vip.go:115] generating kube-vip config ...
	I1209 10:50:23.239060  627293 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 10:50:23.254073  627293 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 10:50:23.254184  627293 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
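The generated static pod above runs kube-vip with ARP announcement of the control-plane VIP 192.168.39.254 on eth0, leader election through the plndr-cp-lock lease in kube-system, and lb_enable so port 8443 is load-balanced across control-plane members. A minimal check of where the VIP currently lives, assuming a kubeconfig pointed at this cluster:

    # on the leader, the VIP appears as a secondary address on eth0
    ip addr show eth0 | grep 192.168.39.254
    # the coordination lease names the current holder
    kubectl -n kube-system get lease plndr-cp-lock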
	I1209 10:50:23.254233  627293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 10:50:23.263688  627293 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1209 10:50:23.263749  627293 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1209 10:50:23.272494  627293 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1209 10:50:23.272527  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 10:50:23.272570  627293 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1209 10:50:23.272599  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 10:50:23.272611  627293 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1209 10:50:23.276784  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1209 10:50:23.276819  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1209 10:50:24.168986  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 10:50:24.169072  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 10:50:24.174707  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1209 10:50:24.174764  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1209 10:50:24.294393  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:50:24.325197  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 10:50:24.325289  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 10:50:24.335547  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1209 10:50:24.335594  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1209 10:50:24.706937  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 10:50:24.715886  627293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 10:50:24.731189  627293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 10:50:24.746662  627293 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 10:50:24.762089  627293 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 10:50:24.765881  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:50:24.777191  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:50:24.904006  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:50:24.921009  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:50:24.921461  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:50:24.921511  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:50:24.937482  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I1209 10:50:24.937973  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:50:24.938486  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:50:24.938508  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:50:24.938885  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:50:24.939098  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:50:24.939248  627293 start.go:317] joinCluster: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:50:24.939386  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 10:50:24.939418  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:50:24.942285  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:50:24.942827  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:50:24.942855  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:50:24.942985  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:50:24.943215  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:50:24.943387  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:50:24.943515  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:50:25.097594  627293 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:50:25.097643  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dvotig.smgl74cs6saznre8 --discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-792382-m02 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443"
	I1209 10:50:47.230030  627293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dvotig.smgl74cs6saznre8 --discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-792382-m02 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443": (22.132356511s)
	I1209 10:50:47.230081  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1209 10:50:47.777805  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792382-m02 minikube.k8s.io/updated_at=2024_12_09T10_50_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=ha-792382 minikube.k8s.io/primary=false
	I1209 10:50:47.938150  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-792382-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 10:50:48.082480  627293 start.go:319] duration metric: took 23.143228187s to joinCluster
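After kubeadm join completes, the two kubectl invocations above label the new member as a non-primary minikube node and drop the node-role.kubernetes.io/control-plane:NoSchedule taint (the trailing '-' in the taint command removes it), so the second control-plane node also accepts regular workloads. A quick verification, assuming a kubeconfig for the cluster:

    kubectl get node ha-792382-m02 --show-labels
    kubectl describe node ha-792382-m02 | grep -i taints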
	I1209 10:50:48.082581  627293 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:50:48.082941  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:50:48.084770  627293 out.go:177] * Verifying Kubernetes components...
	I1209 10:50:48.085991  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:50:48.337603  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:50:48.368412  627293 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:50:48.368651  627293 kapi.go:59] client config for ha-792382: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt", KeyFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key", CAFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 10:50:48.368776  627293 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.69:8443
	I1209 10:50:48.369027  627293 node_ready.go:35] waiting up to 6m0s for node "ha-792382-m02" to be "Ready" ...
	I1209 10:50:48.369182  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:48.369197  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:48.369210  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:48.369215  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:48.379219  627293 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1209 10:50:48.869436  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:48.869471  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:48.869484  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:48.869491  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:48.873562  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:50:49.369649  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:49.369671  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:49.369679  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:49.369685  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:49.372678  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:50:49.869490  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:49.869516  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:49.869525  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:49.869529  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:49.872495  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:50:50.369998  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:50.370028  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:50.370038  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:50.370043  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:50.374983  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:50:50.377595  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:50:50.869651  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:50.869674  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:50.869688  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:50.869692  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:50.906453  627293 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I1209 10:50:51.369287  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:51.369317  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:51.369329  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:51.369335  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:51.372362  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:51.870258  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:51.870289  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:51.870302  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:51.870310  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:51.873898  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:52.370080  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:52.370105  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:52.370115  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:52.370118  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:52.376430  627293 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 10:50:52.869331  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:52.869355  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:52.869364  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:52.869368  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:52.873136  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:52.873737  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:50:53.370232  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:53.370258  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:53.370267  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:53.370272  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:53.373647  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:53.869640  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:53.869666  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:53.869674  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:53.869678  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:53.872620  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:50:54.369762  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:54.369789  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:54.369798  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:54.369802  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:54.373551  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:54.869513  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:54.869538  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:54.869547  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:54.869552  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:54.872817  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:55.369351  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:55.369377  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:55.369387  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:55.369391  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:55.372662  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:55.373185  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:50:55.869601  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:55.869626  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:55.869636  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:55.869642  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:55.873128  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:56.369713  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:56.369741  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:56.369751  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:56.369755  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:56.373053  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:56.870191  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:56.870225  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:56.870238  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:56.870247  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:56.873685  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:57.369825  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:57.369849  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:57.369858  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:57.369861  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:57.373394  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:57.373898  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:50:57.869257  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:57.869284  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:57.869293  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:57.869297  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:57.872590  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:58.369600  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:58.369629  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:58.369641  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:58.369648  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:58.372771  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:58.869748  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:58.869775  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:58.869784  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:58.869788  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:58.873037  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:59.369979  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:59.370004  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:59.370013  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:59.370017  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:59.373442  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:59.869269  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:59.869294  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:59.869309  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:59.869314  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:59.872720  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:59.873370  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:51:00.369254  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:00.369281  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:00.369289  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:00.369294  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:00.372431  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:00.869327  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:00.869352  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:00.869361  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:00.869365  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:00.872790  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:01.369711  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:01.369743  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:01.369755  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:01.369761  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:01.372739  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:01.869629  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:01.869659  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:01.869672  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:01.869680  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:01.873312  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:01.873858  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:51:02.369761  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:02.369798  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:02.369811  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:02.369818  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:02.373514  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:02.869485  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:02.869511  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:02.869524  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:02.869530  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:02.875847  627293 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 10:51:03.369998  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:03.370025  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:03.370034  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:03.370039  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:03.373227  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:03.870196  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:03.870226  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:03.870238  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:03.870245  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:03.873280  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:03.873981  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:51:04.369276  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:04.369305  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:04.369314  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:04.369318  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:04.373386  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:04.869282  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:04.869309  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:04.869317  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:04.869321  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:04.872919  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:05.369501  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:05.369531  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.369544  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.369551  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.373273  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:05.869275  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:05.869301  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.869313  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.869319  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.875077  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:51:05.875712  627293 node_ready.go:49] node "ha-792382-m02" has status "Ready":"True"
	I1209 10:51:05.875741  627293 node_ready.go:38] duration metric: took 17.506691417s for node "ha-792382-m02" to be "Ready" ...
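	The node_ready loop above is a plain readiness poll: the node object is fetched over and over (roughly every 500ms in this log) until its Ready condition reports True. A minimal client-go sketch of that pattern follows; it is illustrative only and not minikube's node_ready.go — the kubeconfig path, node name, and polling interval are assumptions taken from the log.

	// Illustrative sketch (not minikube's code): poll a node's Ready condition with client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady returns true once the named node reports condition Ready=True.
	func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Assumed kubeconfig location; the test harness builds its client differently.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 500ms, mirroring the ~500ms GET interval visible in the log above.
		for {
			ready, err := nodeReady(context.TODO(), client, "ha-792382-m02")
			if err == nil && ready {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}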
	I1209 10:51:05.875753  627293 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 10:51:05.875877  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:05.875894  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.875903  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.875908  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.880622  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:05.886687  627293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.886796  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8hlml
	I1209 10:51:05.886807  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.886815  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.886820  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.891623  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:05.892565  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:05.892583  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.892608  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.892615  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.895456  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.895899  627293 pod_ready.go:93] pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:05.895917  627293 pod_ready.go:82] duration metric: took 9.205439ms for pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.895927  627293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.895993  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rz6mw
	I1209 10:51:05.896006  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.896013  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.896016  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.898484  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.899083  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:05.899101  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.899108  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.899112  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.901260  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.901817  627293 pod_ready.go:93] pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:05.901842  627293 pod_ready.go:82] duration metric: took 5.908358ms for pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.901854  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.901923  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382
	I1209 10:51:05.901934  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.901946  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.901953  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.904274  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.905123  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:05.905142  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.905152  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.905158  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.907644  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.908181  627293 pod_ready.go:93] pod "etcd-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:05.908211  627293 pod_ready.go:82] duration metric: took 6.349761ms for pod "etcd-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.908224  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.908297  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382-m02
	I1209 10:51:05.908307  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.908318  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.908329  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.910369  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.910967  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:05.910983  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.910992  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.910997  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.913018  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.913518  627293 pod_ready.go:93] pod "etcd-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:05.913539  627293 pod_ready.go:82] duration metric: took 5.308048ms for pod "etcd-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.913558  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.070017  627293 request.go:632] Waited for 156.363826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382
	I1209 10:51:06.070081  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382
	I1209 10:51:06.070086  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.070095  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.070102  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.073645  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:06.269848  627293 request.go:632] Waited for 195.364699ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:06.269918  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:06.269924  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.269931  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.269935  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.272803  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:06.273443  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:06.273469  627293 pod_ready.go:82] duration metric: took 359.901606ms for pod "kube-apiserver-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.273484  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.469639  627293 request.go:632] Waited for 196.043735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m02
	I1209 10:51:06.469733  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m02
	I1209 10:51:06.469741  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.469754  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.469762  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.473158  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:06.670306  627293 request.go:632] Waited for 196.412719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:06.670379  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:06.670387  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.670399  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.670409  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.673435  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:06.673975  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:06.673996  627293 pod_ready.go:82] duration metric: took 400.504015ms for pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.674006  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.870147  627293 request.go:632] Waited for 196.063707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382
	I1209 10:51:06.870265  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382
	I1209 10:51:06.870276  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.870285  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.870292  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.873707  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.069908  627293 request.go:632] Waited for 195.387799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:07.069975  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:07.069983  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.069995  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.070015  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.073101  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.073736  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:07.073758  627293 pod_ready.go:82] duration metric: took 399.744041ms for pod "kube-controller-manager-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.073774  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.269459  627293 request.go:632] Waited for 195.589987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m02
	I1209 10:51:07.269554  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m02
	I1209 10:51:07.269566  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.269577  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.269584  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.273156  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.470290  627293 request.go:632] Waited for 196.338376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:07.470357  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:07.470364  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.470374  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.470384  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.474385  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.474970  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:07.474989  627293 pod_ready.go:82] duration metric: took 401.206827ms for pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.475001  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dckpl" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.670046  627293 request.go:632] Waited for 194.938435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dckpl
	I1209 10:51:07.670123  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dckpl
	I1209 10:51:07.670153  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.670161  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.670177  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.673612  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.869971  627293 request.go:632] Waited for 195.374837ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:07.870066  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:07.870077  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.870089  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.870096  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.873498  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.873966  627293 pod_ready.go:93] pod "kube-proxy-dckpl" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:07.873986  627293 pod_ready.go:82] duration metric: took 398.974048ms for pod "kube-proxy-dckpl" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.873999  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wrvgb" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.070122  627293 request.go:632] Waited for 195.97145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgb
	I1209 10:51:08.070208  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgb
	I1209 10:51:08.070220  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.070232  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.070246  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.073337  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:08.270335  627293 request.go:632] Waited for 196.383902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:08.270428  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:08.270439  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.270446  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.270450  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.273875  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:08.274422  627293 pod_ready.go:93] pod "kube-proxy-wrvgb" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:08.274444  627293 pod_ready.go:82] duration metric: took 400.436343ms for pod "kube-proxy-wrvgb" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.274455  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.469480  627293 request.go:632] Waited for 194.92406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382
	I1209 10:51:08.469571  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382
	I1209 10:51:08.469579  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.469593  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.469604  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.473101  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:08.670247  627293 request.go:632] Waited for 196.404632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:08.670318  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:08.670323  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.670331  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.670334  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.673487  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:08.674226  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:08.674250  627293 pod_ready.go:82] duration metric: took 399.789273ms for pod "kube-scheduler-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.674263  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.870290  627293 request.go:632] Waited for 195.926045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m02
	I1209 10:51:08.870371  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m02
	I1209 10:51:08.870379  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.870387  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.870393  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.873809  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:09.069870  627293 request.go:632] Waited for 195.368943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:09.069944  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:09.069950  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.069962  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.069967  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.074483  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:09.075070  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:09.075095  627293 pod_ready.go:82] duration metric: took 400.825701ms for pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:09.075107  627293 pod_ready.go:39] duration metric: took 3.199339967s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
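	The repeated "Waited for … due to client-side throttling, not priority and fairness" lines above come from client-go's default client-side rate limiter (QPS 5, burst 10), which spaces out the back-to-back per-pod and per-node GETs. A short sketch of where those knobs live; the values shown are illustrative, not what minikube configures.

	// Illustrative sketch: raising the client-side rate limits on a rest.Config.
	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// Defaults are QPS=5, Burst=10; these higher values are purely for illustration.
		cfg.QPS = 50
		cfg.Burst = 100
		client := kubernetes.NewForConfigOrDie(cfg)
		// Bursts of requests through this client are now throttled at ~50 req/s
		// instead of 5, so the "Waited for ..." delays seen above shrink.
		_ = client
	}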
	I1209 10:51:09.075137  627293 api_server.go:52] waiting for apiserver process to appear ...
	I1209 10:51:09.075203  627293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 10:51:09.089759  627293 api_server.go:72] duration metric: took 21.007136874s to wait for apiserver process to appear ...
	I1209 10:51:09.089785  627293 api_server.go:88] waiting for apiserver healthz status ...
	I1209 10:51:09.089806  627293 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I1209 10:51:09.093868  627293 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I1209 10:51:09.093935  627293 round_trippers.go:463] GET https://192.168.39.69:8443/version
	I1209 10:51:09.093940  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.093949  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.093957  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.094830  627293 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1209 10:51:09.094916  627293 api_server.go:141] control plane version: v1.31.2
	I1209 10:51:09.094932  627293 api_server.go:131] duration metric: took 5.141357ms to wait for apiserver health ...
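	The apiserver healthz wait above is just an HTTPS GET against the control-plane endpoint that expects the literal body "ok". A minimal sketch, assuming the same endpoint as in the log and skipping certificate verification purely for brevity (a real check would trust the cluster CA instead):

	// Illustrative sketch: probe the apiserver /healthz endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			// Skipping verification only for this sketch; use the cluster CA in practice.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.39.69:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}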
	I1209 10:51:09.094940  627293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 10:51:09.269312  627293 request.go:632] Waited for 174.277582ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:09.269388  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:09.269394  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.269402  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.269407  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.274316  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:09.278484  627293 system_pods.go:59] 17 kube-system pods found
	I1209 10:51:09.278512  627293 system_pods.go:61] "coredns-7c65d6cfc9-8hlml" [d820cd6c-5064-4934-adc8-c68f84c09b46] Running
	I1209 10:51:09.278518  627293 system_pods.go:61] "coredns-7c65d6cfc9-rz6mw" [af297b6d-91f1-4114-b98c-cdfdfbd1589e] Running
	I1209 10:51:09.278523  627293 system_pods.go:61] "etcd-ha-792382" [eee2fbad-7f2f-4f5a-b701-abdbf8456f99] Running
	I1209 10:51:09.278527  627293 system_pods.go:61] "etcd-ha-792382-m02" [424768ff-af5d-4a31-b062-ab2c9576884d] Running
	I1209 10:51:09.278531  627293 system_pods.go:61] "kindnet-bqp2z" [b2c40579-4d72-4efe-b921-1e0f98b91544] Running
	I1209 10:51:09.278534  627293 system_pods.go:61] "kindnet-hkrhk" [9b35011c-ab45-4f55-a60f-08f9c4509c1d] Running
	I1209 10:51:09.278540  627293 system_pods.go:61] "kube-apiserver-ha-792382" [5157cfb0-bc91-4efe-b2e2-689778d5b012] Running
	I1209 10:51:09.278544  627293 system_pods.go:61] "kube-apiserver-ha-792382-m02" [71f9ce1c-aeff-4853-83ec-01a7fb6a81d5] Running
	I1209 10:51:09.278547  627293 system_pods.go:61] "kube-controller-manager-ha-792382" [f8cb7175-f6fd-4dcf-a6a5-66c44aa136ce] Running
	I1209 10:51:09.278550  627293 system_pods.go:61] "kube-controller-manager-ha-792382-m02" [2f7d8deb-ddad-47ac-ad18-5f8e7d0e095d] Running
	I1209 10:51:09.278553  627293 system_pods.go:61] "kube-proxy-dckpl" [13f7bda1-f9c2-4fd0-96e0-b6aee1139bc1] Running
	I1209 10:51:09.278556  627293 system_pods.go:61] "kube-proxy-wrvgb" [2531e29f-a4d5-41f9-8c38-3220b4caf96b] Running
	I1209 10:51:09.278560  627293 system_pods.go:61] "kube-scheduler-ha-792382" [b11693d0-b2e7-45a1-a1a2-6519a1535c45] Running
	I1209 10:51:09.278566  627293 system_pods.go:61] "kube-scheduler-ha-792382-m02" [250ce40c-5cbf-4f5d-a475-4f3d7ec100d9] Running
	I1209 10:51:09.278569  627293 system_pods.go:61] "kube-vip-ha-792382" [511f3a84-b444-4603-8f6f-eeeb262b1384] Running
	I1209 10:51:09.278574  627293 system_pods.go:61] "kube-vip-ha-792382-m02" [f2110c28-09f5-42f9-8e16-beec9759a0a2] Running
	I1209 10:51:09.278578  627293 system_pods.go:61] "storage-provisioner" [4419fe4f-e2ed-4ecb-a912-2dd074e29727] Running
	I1209 10:51:09.278587  627293 system_pods.go:74] duration metric: took 183.639674ms to wait for pod list to return data ...
	I1209 10:51:09.278598  627293 default_sa.go:34] waiting for default service account to be created ...
	I1209 10:51:09.470106  627293 request.go:632] Waited for 191.4045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I1209 10:51:09.470215  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I1209 10:51:09.470227  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.470242  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.470252  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.479626  627293 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1209 10:51:09.479907  627293 default_sa.go:45] found service account: "default"
	I1209 10:51:09.479929  627293 default_sa.go:55] duration metric: took 201.319758ms for default service account to be created ...
	I1209 10:51:09.479942  627293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 10:51:09.670105  627293 request.go:632] Waited for 190.065824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:09.670208  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:09.670215  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.670223  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.670228  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.674641  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:09.679080  627293 system_pods.go:86] 17 kube-system pods found
	I1209 10:51:09.679114  627293 system_pods.go:89] "coredns-7c65d6cfc9-8hlml" [d820cd6c-5064-4934-adc8-c68f84c09b46] Running
	I1209 10:51:09.679123  627293 system_pods.go:89] "coredns-7c65d6cfc9-rz6mw" [af297b6d-91f1-4114-b98c-cdfdfbd1589e] Running
	I1209 10:51:09.679131  627293 system_pods.go:89] "etcd-ha-792382" [eee2fbad-7f2f-4f5a-b701-abdbf8456f99] Running
	I1209 10:51:09.679138  627293 system_pods.go:89] "etcd-ha-792382-m02" [424768ff-af5d-4a31-b062-ab2c9576884d] Running
	I1209 10:51:09.679143  627293 system_pods.go:89] "kindnet-bqp2z" [b2c40579-4d72-4efe-b921-1e0f98b91544] Running
	I1209 10:51:09.679149  627293 system_pods.go:89] "kindnet-hkrhk" [9b35011c-ab45-4f55-a60f-08f9c4509c1d] Running
	I1209 10:51:09.679156  627293 system_pods.go:89] "kube-apiserver-ha-792382" [5157cfb0-bc91-4efe-b2e2-689778d5b012] Running
	I1209 10:51:09.679165  627293 system_pods.go:89] "kube-apiserver-ha-792382-m02" [71f9ce1c-aeff-4853-83ec-01a7fb6a81d5] Running
	I1209 10:51:09.679171  627293 system_pods.go:89] "kube-controller-manager-ha-792382" [f8cb7175-f6fd-4dcf-a6a5-66c44aa136ce] Running
	I1209 10:51:09.679180  627293 system_pods.go:89] "kube-controller-manager-ha-792382-m02" [2f7d8deb-ddad-47ac-ad18-5f8e7d0e095d] Running
	I1209 10:51:09.679184  627293 system_pods.go:89] "kube-proxy-dckpl" [13f7bda1-f9c2-4fd0-96e0-b6aee1139bc1] Running
	I1209 10:51:09.679188  627293 system_pods.go:89] "kube-proxy-wrvgb" [2531e29f-a4d5-41f9-8c38-3220b4caf96b] Running
	I1209 10:51:09.679195  627293 system_pods.go:89] "kube-scheduler-ha-792382" [b11693d0-b2e7-45a1-a1a2-6519a1535c45] Running
	I1209 10:51:09.679198  627293 system_pods.go:89] "kube-scheduler-ha-792382-m02" [250ce40c-5cbf-4f5d-a475-4f3d7ec100d9] Running
	I1209 10:51:09.679204  627293 system_pods.go:89] "kube-vip-ha-792382" [511f3a84-b444-4603-8f6f-eeeb262b1384] Running
	I1209 10:51:09.679208  627293 system_pods.go:89] "kube-vip-ha-792382-m02" [f2110c28-09f5-42f9-8e16-beec9759a0a2] Running
	I1209 10:51:09.679214  627293 system_pods.go:89] "storage-provisioner" [4419fe4f-e2ed-4ecb-a912-2dd074e29727] Running
	I1209 10:51:09.679221  627293 system_pods.go:126] duration metric: took 199.268781ms to wait for k8s-apps to be running ...
	I1209 10:51:09.679230  627293 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 10:51:09.679276  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:51:09.694076  627293 system_svc.go:56] duration metric: took 14.835467ms WaitForService to wait for kubelet
	I1209 10:51:09.694109  627293 kubeadm.go:582] duration metric: took 21.611489035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 10:51:09.694134  627293 node_conditions.go:102] verifying NodePressure condition ...
	I1209 10:51:09.869608  627293 request.go:632] Waited for 175.356595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes
	I1209 10:51:09.869706  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes
	I1209 10:51:09.869714  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.869723  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.869734  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.873420  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:09.874254  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:51:09.874278  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:51:09.874300  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:51:09.874304  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:51:09.874310  627293 node_conditions.go:105] duration metric: took 180.168766ms to run NodePressure ...
	I1209 10:51:09.874324  627293 start.go:241] waiting for startup goroutines ...
	I1209 10:51:09.874349  627293 start.go:255] writing updated cluster config ...
	I1209 10:51:09.876293  627293 out.go:201] 
	I1209 10:51:09.877844  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:51:09.877938  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:51:09.879618  627293 out.go:177] * Starting "ha-792382-m03" control-plane node in "ha-792382" cluster
	I1209 10:51:09.880651  627293 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:51:09.880677  627293 cache.go:56] Caching tarball of preloaded images
	I1209 10:51:09.880794  627293 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 10:51:09.880808  627293 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 10:51:09.880894  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:51:09.881065  627293 start.go:360] acquireMachinesLock for ha-792382-m03: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 10:51:09.881109  627293 start.go:364] duration metric: took 24.695µs to acquireMachinesLock for "ha-792382-m03"
	I1209 10:51:09.881155  627293 start.go:93] Provisioning new machine with config: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:51:09.881251  627293 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1209 10:51:09.882597  627293 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 10:51:09.882697  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:51:09.882736  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:51:09.898133  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41609
	I1209 10:51:09.898752  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:51:09.899364  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:51:09.899388  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:51:09.899714  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:51:09.899932  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetMachineName
	I1209 10:51:09.900153  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:09.900311  627293 start.go:159] libmachine.API.Create for "ha-792382" (driver="kvm2")
	I1209 10:51:09.900340  627293 client.go:168] LocalClient.Create starting
	I1209 10:51:09.900368  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 10:51:09.900399  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:51:09.900414  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:51:09.900469  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 10:51:09.900490  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:51:09.900500  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:51:09.900517  627293 main.go:141] libmachine: Running pre-create checks...
	I1209 10:51:09.900526  627293 main.go:141] libmachine: (ha-792382-m03) Calling .PreCreateCheck
	I1209 10:51:09.900676  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetConfigRaw
	I1209 10:51:09.901024  627293 main.go:141] libmachine: Creating machine...
	I1209 10:51:09.901037  627293 main.go:141] libmachine: (ha-792382-m03) Calling .Create
	I1209 10:51:09.901229  627293 main.go:141] libmachine: (ha-792382-m03) Creating KVM machine...
	I1209 10:51:09.902418  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found existing default KVM network
	I1209 10:51:09.902584  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found existing private KVM network mk-ha-792382
	I1209 10:51:09.902745  627293 main.go:141] libmachine: (ha-792382-m03) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03 ...
	I1209 10:51:09.902768  627293 main.go:141] libmachine: (ha-792382-m03) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 10:51:09.902867  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:09.902742  628056 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:51:09.902959  627293 main.go:141] libmachine: (ha-792382-m03) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 10:51:10.187575  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:10.187437  628056 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa...
	I1209 10:51:10.500975  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:10.500841  628056 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/ha-792382-m03.rawdisk...
	I1209 10:51:10.501016  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Writing magic tar header
	I1209 10:51:10.501026  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Writing SSH key tar header
	I1209 10:51:10.501034  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:10.500985  628056 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03 ...
	I1209 10:51:10.501188  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03
	I1209 10:51:10.501214  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03 (perms=drwx------)
	I1209 10:51:10.501235  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 10:51:10.501255  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:51:10.501270  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 10:51:10.501289  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 10:51:10.501315  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 10:51:10.501328  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins
	I1209 10:51:10.501340  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home
	I1209 10:51:10.501354  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Skipping /home - not owner
	I1209 10:51:10.501371  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 10:51:10.501393  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 10:51:10.501413  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 10:51:10.501426  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
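The "Checking permissions on dir" / "Setting executable bit set" lines walk from the machine directory up toward the filesystem root, adding the owner-execute bit on directories the current user owns and skipping the rest. A rough local sketch of that walk (the ownedBy helper and exact mode handling are assumptions for brevity):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// ownedBy reports whether the file is owned by the given uid (Linux-specific).
func ownedBy(info os.FileInfo, uid int) bool {
	st, ok := info.Sys().(*syscall.Stat_t)
	return ok && int(st.Uid) == uid
}

// fixPermissions walks up the directory tree, ensuring each owned directory is
// traversable, and skipping directories the user does not own.
func fixPermissions(dir string) error {
	uid := os.Getuid()
	for ; dir != "/" && dir != "."; dir = filepath.Dir(dir) {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		if !ownedBy(info, uid) {
			fmt.Printf("Skipping %s - not owner\n", dir)
			continue
		}
		mode := info.Mode() | 0o100 // owner execute bit so the dir can be traversed
		if err := os.Chmod(dir, mode.Perm()); err != nil {
			return err
		}
		fmt.Printf("Setting executable bit on %s (perms=%v)\n", dir, mode.Perm())
	}
	return nil
}

func main() {
	if err := fixPermissions("/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03"); err != nil {
		fmt.Println("error:", err)
	}
}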
	I1209 10:51:10.501440  627293 main.go:141] libmachine: (ha-792382-m03) Creating domain...
	I1209 10:51:10.502439  627293 main.go:141] libmachine: (ha-792382-m03) define libvirt domain using xml: 
	I1209 10:51:10.502466  627293 main.go:141] libmachine: (ha-792382-m03) <domain type='kvm'>
	I1209 10:51:10.502476  627293 main.go:141] libmachine: (ha-792382-m03)   <name>ha-792382-m03</name>
	I1209 10:51:10.502484  627293 main.go:141] libmachine: (ha-792382-m03)   <memory unit='MiB'>2200</memory>
	I1209 10:51:10.502490  627293 main.go:141] libmachine: (ha-792382-m03)   <vcpu>2</vcpu>
	I1209 10:51:10.502495  627293 main.go:141] libmachine: (ha-792382-m03)   <features>
	I1209 10:51:10.502506  627293 main.go:141] libmachine: (ha-792382-m03)     <acpi/>
	I1209 10:51:10.502516  627293 main.go:141] libmachine: (ha-792382-m03)     <apic/>
	I1209 10:51:10.502524  627293 main.go:141] libmachine: (ha-792382-m03)     <pae/>
	I1209 10:51:10.502534  627293 main.go:141] libmachine: (ha-792382-m03)     
	I1209 10:51:10.502544  627293 main.go:141] libmachine: (ha-792382-m03)   </features>
	I1209 10:51:10.502552  627293 main.go:141] libmachine: (ha-792382-m03)   <cpu mode='host-passthrough'>
	I1209 10:51:10.502587  627293 main.go:141] libmachine: (ha-792382-m03)   
	I1209 10:51:10.502612  627293 main.go:141] libmachine: (ha-792382-m03)   </cpu>
	I1209 10:51:10.502650  627293 main.go:141] libmachine: (ha-792382-m03)   <os>
	I1209 10:51:10.502668  627293 main.go:141] libmachine: (ha-792382-m03)     <type>hvm</type>
	I1209 10:51:10.502674  627293 main.go:141] libmachine: (ha-792382-m03)     <boot dev='cdrom'/>
	I1209 10:51:10.502679  627293 main.go:141] libmachine: (ha-792382-m03)     <boot dev='hd'/>
	I1209 10:51:10.502688  627293 main.go:141] libmachine: (ha-792382-m03)     <bootmenu enable='no'/>
	I1209 10:51:10.502693  627293 main.go:141] libmachine: (ha-792382-m03)   </os>
	I1209 10:51:10.502731  627293 main.go:141] libmachine: (ha-792382-m03)   <devices>
	I1209 10:51:10.502756  627293 main.go:141] libmachine: (ha-792382-m03)     <disk type='file' device='cdrom'>
	I1209 10:51:10.502773  627293 main.go:141] libmachine: (ha-792382-m03)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/boot2docker.iso'/>
	I1209 10:51:10.502784  627293 main.go:141] libmachine: (ha-792382-m03)       <target dev='hdc' bus='scsi'/>
	I1209 10:51:10.502796  627293 main.go:141] libmachine: (ha-792382-m03)       <readonly/>
	I1209 10:51:10.502806  627293 main.go:141] libmachine: (ha-792382-m03)     </disk>
	I1209 10:51:10.502815  627293 main.go:141] libmachine: (ha-792382-m03)     <disk type='file' device='disk'>
	I1209 10:51:10.502827  627293 main.go:141] libmachine: (ha-792382-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 10:51:10.502844  627293 main.go:141] libmachine: (ha-792382-m03)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/ha-792382-m03.rawdisk'/>
	I1209 10:51:10.502854  627293 main.go:141] libmachine: (ha-792382-m03)       <target dev='hda' bus='virtio'/>
	I1209 10:51:10.502862  627293 main.go:141] libmachine: (ha-792382-m03)     </disk>
	I1209 10:51:10.502873  627293 main.go:141] libmachine: (ha-792382-m03)     <interface type='network'>
	I1209 10:51:10.502886  627293 main.go:141] libmachine: (ha-792382-m03)       <source network='mk-ha-792382'/>
	I1209 10:51:10.502901  627293 main.go:141] libmachine: (ha-792382-m03)       <model type='virtio'/>
	I1209 10:51:10.502917  627293 main.go:141] libmachine: (ha-792382-m03)     </interface>
	I1209 10:51:10.502927  627293 main.go:141] libmachine: (ha-792382-m03)     <interface type='network'>
	I1209 10:51:10.502935  627293 main.go:141] libmachine: (ha-792382-m03)       <source network='default'/>
	I1209 10:51:10.502945  627293 main.go:141] libmachine: (ha-792382-m03)       <model type='virtio'/>
	I1209 10:51:10.502954  627293 main.go:141] libmachine: (ha-792382-m03)     </interface>
	I1209 10:51:10.502965  627293 main.go:141] libmachine: (ha-792382-m03)     <serial type='pty'>
	I1209 10:51:10.502981  627293 main.go:141] libmachine: (ha-792382-m03)       <target port='0'/>
	I1209 10:51:10.503011  627293 main.go:141] libmachine: (ha-792382-m03)     </serial>
	I1209 10:51:10.503041  627293 main.go:141] libmachine: (ha-792382-m03)     <console type='pty'>
	I1209 10:51:10.503058  627293 main.go:141] libmachine: (ha-792382-m03)       <target type='serial' port='0'/>
	I1209 10:51:10.503071  627293 main.go:141] libmachine: (ha-792382-m03)     </console>
	I1209 10:51:10.503082  627293 main.go:141] libmachine: (ha-792382-m03)     <rng model='virtio'>
	I1209 10:51:10.503096  627293 main.go:141] libmachine: (ha-792382-m03)       <backend model='random'>/dev/random</backend>
	I1209 10:51:10.503113  627293 main.go:141] libmachine: (ha-792382-m03)     </rng>
	I1209 10:51:10.503127  627293 main.go:141] libmachine: (ha-792382-m03)     
	I1209 10:51:10.503136  627293 main.go:141] libmachine: (ha-792382-m03)     
	I1209 10:51:10.503142  627293 main.go:141] libmachine: (ha-792382-m03)   </devices>
	I1209 10:51:10.503150  627293 main.go:141] libmachine: (ha-792382-m03) </domain>
	I1209 10:51:10.503164  627293 main.go:141] libmachine: (ha-792382-m03) 
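The domain definition logged above is rendered from a template and handed to libvirt. As a minimal sketch, assuming a simplified spec type and field set (not minikube's actual types), the same structure can be produced with Go's text/template:

package main

import (
	"os"
	"text/template"
)

// DomainSpec is a hypothetical stand-in for the values filled into the
// libvirt domain template (name, memory, vCPUs, disk/ISO paths, network).
type DomainSpec struct {
	Name     string
	MemoryMB int
	VCPUs    int
	DiskPath string
	ISOPath  string
	Network  string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>`

func main() {
	spec := DomainSpec{
		Name:     "ha-792382-m03",
		MemoryMB: 2200,
		VCPUs:    2,
		DiskPath: "/path/to/ha-792382-m03.rawdisk",
		ISOPath:  "/path/to/boot2docker.iso",
		Network:  "mk-ha-792382",
	}
	// Render the XML; the real driver passes the result to libvirt to define the domain.
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, spec); err != nil {
		panic(err)
	}
}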
	I1209 10:51:10.509799  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:26:51:82 in network default
	I1209 10:51:10.510544  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:10.510571  627293 main.go:141] libmachine: (ha-792382-m03) Ensuring networks are active...
	I1209 10:51:10.511459  627293 main.go:141] libmachine: (ha-792382-m03) Ensuring network default is active
	I1209 10:51:10.511785  627293 main.go:141] libmachine: (ha-792382-m03) Ensuring network mk-ha-792382 is active
	I1209 10:51:10.512166  627293 main.go:141] libmachine: (ha-792382-m03) Getting domain xml...
	I1209 10:51:10.512954  627293 main.go:141] libmachine: (ha-792382-m03) Creating domain...
	I1209 10:51:11.772243  627293 main.go:141] libmachine: (ha-792382-m03) Waiting to get IP...
	I1209 10:51:11.773341  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:11.773804  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:11.773837  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:11.773768  628056 retry.go:31] will retry after 261.519944ms: waiting for machine to come up
	I1209 10:51:12.038077  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:12.038774  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:12.038804  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:12.038709  628056 retry.go:31] will retry after 310.562513ms: waiting for machine to come up
	I1209 10:51:12.350405  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:12.350812  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:12.350870  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:12.350779  628056 retry.go:31] will retry after 381.875413ms: waiting for machine to come up
	I1209 10:51:12.734428  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:12.734917  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:12.734939  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:12.734868  628056 retry.go:31] will retry after 376.611685ms: waiting for machine to come up
	I1209 10:51:13.113430  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:13.113850  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:13.113878  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:13.113807  628056 retry.go:31] will retry after 480.736793ms: waiting for machine to come up
	I1209 10:51:13.596329  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:13.596796  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:13.596819  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:13.596753  628056 retry.go:31] will retry after 875.034768ms: waiting for machine to come up
	I1209 10:51:14.473751  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:14.474126  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:14.474155  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:14.474088  628056 retry.go:31] will retry after 816.368717ms: waiting for machine to come up
	I1209 10:51:15.292960  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:15.293587  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:15.293618  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:15.293489  628056 retry.go:31] will retry after 1.183655157s: waiting for machine to come up
	I1209 10:51:16.478955  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:16.479455  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:16.479486  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:16.479390  628056 retry.go:31] will retry after 1.459421983s: waiting for machine to come up
	I1209 10:51:17.940565  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:17.940909  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:17.940939  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:17.940853  628056 retry.go:31] will retry after 2.01883018s: waiting for machine to come up
	I1209 10:51:19.961861  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:19.962417  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:19.962457  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:19.962353  628056 retry.go:31] will retry after 1.857861431s: waiting for machine to come up
	I1209 10:51:21.822060  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:21.822610  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:21.822640  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:21.822556  628056 retry.go:31] will retry after 2.674364218s: waiting for machine to come up
	I1209 10:51:24.499290  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:24.499696  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:24.499718  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:24.499647  628056 retry.go:31] will retry after 3.815833745s: waiting for machine to come up
	I1209 10:51:28.319279  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:28.319654  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:28.319685  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:28.319601  628056 retry.go:31] will retry after 5.165694329s: waiting for machine to come up
	I1209 10:51:33.487484  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.487908  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has current primary IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.487939  627293 main.go:141] libmachine: (ha-792382-m03) Found IP for machine: 192.168.39.82
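The repeated "will retry after …: waiting for machine to come up" entries above come from a backoff loop that polls for the domain's DHCP lease until an IP appears. A rough sketch of that pattern (lookupIP is a hypothetical placeholder for the lease query; the delay growth and timeout are assumptions):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder for querying the libvirt network's DHCP leases
// for the domain's MAC address; it errors until a lease appears.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls with a jittered, growing delay, mirroring the retry lines
// in the log, until an IP is found or the timeout expires.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay += delay / 2 // grow the base delay between attempts
	}
	return "", fmt.Errorf("machine %s did not get an IP within %v", mac, timeout)
}

func main() {
	if ip, err := waitForIP("52:54:00:10:ae:3c", 5*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}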
	I1209 10:51:33.487954  627293 main.go:141] libmachine: (ha-792382-m03) Reserving static IP address...
	I1209 10:51:33.488381  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find host DHCP lease matching {name: "ha-792382-m03", mac: "52:54:00:10:ae:3c", ip: "192.168.39.82"} in network mk-ha-792382
	I1209 10:51:33.564150  627293 main.go:141] libmachine: (ha-792382-m03) Reserved static IP address: 192.168.39.82
	I1209 10:51:33.564197  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Getting to WaitForSSH function...
	I1209 10:51:33.564206  627293 main.go:141] libmachine: (ha-792382-m03) Waiting for SSH to be available...
	I1209 10:51:33.567024  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.567471  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:minikube Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:33.567501  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.567664  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Using SSH client type: external
	I1209 10:51:33.567687  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa (-rw-------)
	I1209 10:51:33.567722  627293 main.go:141] libmachine: (ha-792382-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 10:51:33.567734  627293 main.go:141] libmachine: (ha-792382-m03) DBG | About to run SSH command:
	I1209 10:51:33.567748  627293 main.go:141] libmachine: (ha-792382-m03) DBG | exit 0
	I1209 10:51:33.698092  627293 main.go:141] libmachine: (ha-792382-m03) DBG | SSH cmd err, output: <nil>: 
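The WaitForSSH step above simply runs `ssh … exit 0` against the new machine until it returns zero. A minimal sketch of that probe with os/exec (the flags are taken from the command shown in the log; the loop count and sleep interval are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `ssh ... exit 0`; a zero exit status means sshd is reachable
// and the generated key is accepted.
func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"docker@"+ip,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	ip := "192.168.39.82"
	key := "/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa"
	for i := 0; i < 30; i++ {
		if sshReady(ip, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}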
	I1209 10:51:33.698421  627293 main.go:141] libmachine: (ha-792382-m03) KVM machine creation complete!
	I1209 10:51:33.698819  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetConfigRaw
	I1209 10:51:33.699478  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:33.699674  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:33.699826  627293 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 10:51:33.699837  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetState
	I1209 10:51:33.701167  627293 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 10:51:33.701183  627293 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 10:51:33.701191  627293 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 10:51:33.701198  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:33.703744  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.704133  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:33.704162  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.704266  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:33.704462  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.704600  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.704723  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:33.704916  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:33.705157  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:33.705168  627293 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 10:51:33.813390  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:51:33.813423  627293 main.go:141] libmachine: Detecting the provisioner...
	I1209 10:51:33.813436  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:33.816441  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.816804  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:33.816841  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.816951  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:33.817167  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.817376  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.817559  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:33.817716  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:33.817907  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:33.817921  627293 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 10:51:33.926605  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 10:51:33.926676  627293 main.go:141] libmachine: found compatible host: buildroot
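Provisioner detection reads /etc/os-release over SSH and matches the ID/NAME fields against known hosts (here Buildroot). A small local sketch of that parsing, using the output captured above as sample input:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns the key=value lines of an os-release file into a map,
// stripping optional quotes around values.
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		out[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return out
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["NAME"], info["VERSION_ID"])
	}
}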
	I1209 10:51:33.926683  627293 main.go:141] libmachine: Provisioning with buildroot...
	I1209 10:51:33.926691  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetMachineName
	I1209 10:51:33.926942  627293 buildroot.go:166] provisioning hostname "ha-792382-m03"
	I1209 10:51:33.926972  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetMachineName
	I1209 10:51:33.927120  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:33.929899  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.930353  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:33.930382  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.930545  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:33.930780  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.930935  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.931076  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:33.931236  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:33.931442  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:33.931455  627293 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792382-m03 && echo "ha-792382-m03" | sudo tee /etc/hostname
	I1209 10:51:34.053804  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792382-m03
	
	I1209 10:51:34.053838  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.056450  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.056795  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.056821  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.057070  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.057253  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.057460  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.057580  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.057749  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:34.057912  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:34.057932  627293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792382-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792382-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792382-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 10:51:34.174396  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:51:34.174436  627293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 10:51:34.174459  627293 buildroot.go:174] setting up certificates
	I1209 10:51:34.174471  627293 provision.go:84] configureAuth start
	I1209 10:51:34.174484  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetMachineName
	I1209 10:51:34.174826  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetIP
	I1209 10:51:34.178006  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.178384  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.178414  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.178593  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.180882  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.181259  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.181297  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.181434  627293 provision.go:143] copyHostCerts
	I1209 10:51:34.181467  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:51:34.181509  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 10:51:34.181521  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:51:34.181599  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 10:51:34.181708  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:51:34.181739  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 10:51:34.181750  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:51:34.181796  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 10:51:34.181862  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:51:34.181879  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 10:51:34.181885  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:51:34.181910  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 10:51:34.181961  627293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.ha-792382-m03 san=[127.0.0.1 192.168.39.82 ha-792382-m03 localhost minikube]
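The server cert generated above carries the node IP, loopback address, and hostnames as SANs. A simplified Go sketch of issuing such a certificate with crypto/x509 (self-signed here for brevity; the real flow signs with the ca.pem/ca-key.pem pair, and the validity period is an assumption):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs mirror the san=[...] list in the log: loopback, the node IP, and hostnames.
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.82")}
	dnsNames := []string{"ha-792382-m03", "localhost", "minikube"}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-792382-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dnsNames,
	}
	// Self-signed (template == parent) to keep the sketch short.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}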
	I1209 10:51:34.410867  627293 provision.go:177] copyRemoteCerts
	I1209 10:51:34.410930  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 10:51:34.410961  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.414202  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.414663  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.414696  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.414964  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.415202  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.415374  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.415561  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa Username:docker}
	I1209 10:51:34.500121  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 10:51:34.500216  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 10:51:34.525465  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 10:51:34.525566  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 10:51:34.548733  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 10:51:34.548819  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 10:51:34.570848  627293 provision.go:87] duration metric: took 396.361471ms to configureAuth
	I1209 10:51:34.570884  627293 buildroot.go:189] setting minikube options for container-runtime
	I1209 10:51:34.571164  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:51:34.571276  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.574107  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.574532  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.574557  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.574761  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.574957  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.575114  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.575329  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.575548  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:34.575797  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:34.575824  627293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 10:51:34.816625  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 10:51:34.816655  627293 main.go:141] libmachine: Checking connection to Docker...
	I1209 10:51:34.816670  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetURL
	I1209 10:51:34.817924  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Using libvirt version 6000000
	I1209 10:51:34.820293  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.820739  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.820782  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.820943  627293 main.go:141] libmachine: Docker is up and running!
	I1209 10:51:34.820954  627293 main.go:141] libmachine: Reticulating splines...
	I1209 10:51:34.820962  627293 client.go:171] duration metric: took 24.920612799s to LocalClient.Create
	I1209 10:51:34.820990  627293 start.go:167] duration metric: took 24.920677638s to libmachine.API.Create "ha-792382"
	I1209 10:51:34.821001  627293 start.go:293] postStartSetup for "ha-792382-m03" (driver="kvm2")
	I1209 10:51:34.821015  627293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 10:51:34.821041  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:34.821314  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 10:51:34.821340  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.823716  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.824123  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.824150  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.824346  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.824540  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.824710  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.824899  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa Username:docker}
	I1209 10:51:34.908596  627293 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 10:51:34.912587  627293 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 10:51:34.912634  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 10:51:34.912758  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 10:51:34.912881  627293 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 10:51:34.912894  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /etc/ssl/certs/6170172.pem
	I1209 10:51:34.913014  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 10:51:34.921828  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:51:34.944676  627293 start.go:296] duration metric: took 123.657477ms for postStartSetup
	I1209 10:51:34.944735  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetConfigRaw
	I1209 10:51:34.945372  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetIP
	I1209 10:51:34.948020  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.948350  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.948374  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.948706  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:51:34.948901  627293 start.go:128] duration metric: took 25.067639086s to createHost
	I1209 10:51:34.948928  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.951092  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.951471  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.951504  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.951672  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.951858  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.952015  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.952130  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.952269  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:34.952475  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:34.952491  627293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 10:51:35.062736  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733741495.040495881
	
	I1209 10:51:35.062764  627293 fix.go:216] guest clock: 1733741495.040495881
	I1209 10:51:35.062773  627293 fix.go:229] Guest: 2024-12-09 10:51:35.040495881 +0000 UTC Remote: 2024-12-09 10:51:34.948914535 +0000 UTC m=+142.833153468 (delta=91.581346ms)
	I1209 10:51:35.062795  627293 fix.go:200] guest clock delta is within tolerance: 91.581346ms
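The guest-clock check compares the guest's `date +%s.%N` output against the host time and only intervenes when the delta exceeds a tolerance. A sketch of that comparison (the 1-second tolerance here is an assumption used for illustration):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the "seconds.nanoseconds" string from `date +%s.%N`
// into a time.Time; %N is zero-padded to nine digits, so a direct parse works.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1733741495.040495881")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	// Assumed tolerance: only resync the guest clock if it drifts past 1s.
	if math.Abs(delta.Seconds()) > 1.0 {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	}
}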
	I1209 10:51:35.062802  627293 start.go:83] releasing machines lock for "ha-792382-m03", held for 25.181683585s
	I1209 10:51:35.062825  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:35.063125  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetIP
	I1209 10:51:35.065564  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.065919  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:35.065950  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.068041  627293 out.go:177] * Found network options:
	I1209 10:51:35.069311  627293 out.go:177]   - NO_PROXY=192.168.39.69,192.168.39.89
	W1209 10:51:35.070337  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	W1209 10:51:35.070367  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 10:51:35.070382  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:35.070888  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:35.071098  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:35.071216  627293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 10:51:35.071260  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	W1209 10:51:35.071333  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	W1209 10:51:35.071358  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 10:51:35.071448  627293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 10:51:35.071472  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:35.074136  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.074287  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.074566  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:35.074588  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.074614  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:35.074633  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.074729  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:35.074920  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:35.074923  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:35.075091  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:35.075094  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:35.075270  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:35.075298  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa Username:docker}
	I1209 10:51:35.075413  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa Username:docker}
	I1209 10:51:35.318511  627293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 10:51:35.324511  627293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 10:51:35.324586  627293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 10:51:35.341575  627293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 10:51:35.341607  627293 start.go:495] detecting cgroup driver to use...
	I1209 10:51:35.341686  627293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 10:51:35.357724  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 10:51:35.372685  627293 docker.go:217] disabling cri-docker service (if available) ...
	I1209 10:51:35.372771  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 10:51:35.387627  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 10:51:35.401716  627293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 10:51:35.525416  627293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 10:51:35.688544  627293 docker.go:233] disabling docker service ...
	I1209 10:51:35.688627  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 10:51:35.703495  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 10:51:35.717769  627293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 10:51:35.838656  627293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 10:51:35.968740  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 10:51:35.982914  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 10:51:36.001011  627293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 10:51:36.001092  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.011496  627293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 10:51:36.011565  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.021527  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.031202  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.041196  627293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 10:51:36.051656  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.062085  627293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.078955  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
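The sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pinning the pause image, switching the cgroup manager to cgroupfs, and resetting conmon_cgroup. A local Go sketch of the same rewrites with regexp (the starting config content is an illustrative assumption; the real file lives on the guest):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative starting content for 02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Mirror the sed edits from the log.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	fmt.Print(conf)
}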
	I1209 10:51:36.088919  627293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 10:51:36.098428  627293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 10:51:36.098491  627293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 10:51:36.112478  627293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 10:51:36.121985  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:51:36.236147  627293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 10:51:36.331891  627293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 10:51:36.331989  627293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 10:51:36.336578  627293 start.go:563] Will wait 60s for crictl version
	I1209 10:51:36.336641  627293 ssh_runner.go:195] Run: which crictl
	I1209 10:51:36.340301  627293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 10:51:36.380474  627293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 10:51:36.380557  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:51:36.408527  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:51:36.438078  627293 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 10:51:36.439329  627293 out.go:177]   - env NO_PROXY=192.168.39.69
	I1209 10:51:36.440501  627293 out.go:177]   - env NO_PROXY=192.168.39.69,192.168.39.89
	I1209 10:51:36.441659  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetIP
	I1209 10:51:36.444828  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:36.445310  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:36.445339  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:36.445521  627293 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 10:51:36.449517  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
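The bash one-liner above drops any existing host.minikube.internal entry from /etc/hosts and appends a fresh one. A string-level sketch of the same upsert in Go (file I/O omitted; the matching rules are simplified):

package main

import (
	"fmt"
	"strings"
)

// upsertHostEntry removes any existing line ending in the hostname and
// appends a fresh "ip<TAB>hostname" entry, the same effect as the
// grep -v / echo pipeline run on the guest.
func upsertHostEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasSuffix(trimmed, "\t"+name) || strings.HasSuffix(trimmed, " "+name) {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	sample := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(upsertHostEntry(sample, "192.168.39.1", "host.minikube.internal"))
}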
	I1209 10:51:36.461352  627293 mustload.go:65] Loading cluster: ha-792382
	I1209 10:51:36.461581  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:51:36.461851  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:51:36.461915  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:51:36.476757  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I1209 10:51:36.477266  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:51:36.477839  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:51:36.477861  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:51:36.478264  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:51:36.478470  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:51:36.480228  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:51:36.480540  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:51:36.480578  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:51:36.495892  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I1209 10:51:36.496439  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:51:36.496999  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:51:36.497024  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:51:36.497365  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:51:36.497597  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:51:36.497777  627293 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382 for IP: 192.168.39.82
	I1209 10:51:36.497796  627293 certs.go:194] generating shared ca certs ...
	I1209 10:51:36.497816  627293 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:51:36.497951  627293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 10:51:36.497987  627293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 10:51:36.497996  627293 certs.go:256] generating profile certs ...
	I1209 10:51:36.498067  627293 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key
	I1209 10:51:36.498091  627293 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.74be6275
	I1209 10:51:36.498107  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.74be6275 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.89 192.168.39.82 192.168.39.254]
	I1209 10:51:36.575706  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.74be6275 ...
	I1209 10:51:36.575744  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.74be6275: {Name:mkc0279d5f95c7c05a4a03239304c698f543bc23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:51:36.575927  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.74be6275 ...
	I1209 10:51:36.575940  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.74be6275: {Name:mk628bdb195c5612308f11734296bd7934f36956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:51:36.576016  627293 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.74be6275 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt
	I1209 10:51:36.576148  627293 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.74be6275 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key
	I1209 10:51:36.576277  627293 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key
	I1209 10:51:36.576293  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 10:51:36.576307  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 10:51:36.576321  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 10:51:36.576334  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 10:51:36.576347  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 10:51:36.576359  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 10:51:36.576371  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 10:51:36.590260  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 10:51:36.590358  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 10:51:36.590394  627293 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 10:51:36.590412  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 10:51:36.590439  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 10:51:36.590462  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 10:51:36.590483  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 10:51:36.590521  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:51:36.590548  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /usr/share/ca-certificates/6170172.pem
	I1209 10:51:36.590563  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:51:36.590576  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem -> /usr/share/ca-certificates/617017.pem
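	Annotation: the apiserver certificate generated above must carry every address clients may dial (service IP 10.96.0.1, localhost, each control-plane IP, and the HA VIP 192.168.39.254) as Subject Alternative Names. One way to confirm that after the copy, assuming openssl is available on the node:
	    # list the SANs baked into the copied apiserver certificate
	    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'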
	I1209 10:51:36.590614  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:51:36.594031  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:51:36.594418  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:51:36.594452  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:51:36.594660  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:51:36.594910  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:51:36.595086  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:51:36.595232  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:51:36.666577  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 10:51:36.671392  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 10:51:36.681688  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 10:51:36.685694  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 10:51:36.696364  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 10:51:36.700718  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 10:51:36.712302  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 10:51:36.716534  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 10:51:36.728128  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 10:51:36.732026  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 10:51:36.743956  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 10:51:36.748200  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1209 10:51:36.761818  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 10:51:36.786260  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 10:51:36.809394  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 10:51:36.832350  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 10:51:36.854875  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1209 10:51:36.876691  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 10:51:36.900011  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 10:51:36.922859  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 10:51:36.945086  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 10:51:36.966983  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 10:51:36.989660  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 10:51:37.011442  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 10:51:37.027256  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 10:51:37.042921  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 10:51:37.059579  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 10:51:37.078911  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 10:51:37.094738  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1209 10:51:37.112113  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 10:51:37.130720  627293 ssh_runner.go:195] Run: openssl version
	I1209 10:51:37.136460  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 10:51:37.148061  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:51:37.152555  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:51:37.152627  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:51:37.158639  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 10:51:37.170061  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 10:51:37.180567  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 10:51:37.184633  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 10:51:37.184695  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 10:51:37.190044  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 10:51:37.200767  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 10:51:37.211239  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 10:51:37.215531  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 10:51:37.215617  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 10:51:37.221282  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
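	Annotation: the "test -L ... || ln -fs" steps above create the hash-named symlinks OpenSSL uses to look up trusted CAs under /etc/ssl/certs; the hash in the link name (e.g. b5213941.0) is derived from the certificate itself. A sketch of producing one such link by hand:
	    # compute the subject hash and create the matching trust-store symlink
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"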
	I1209 10:51:37.232891  627293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 10:51:37.237033  627293 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 10:51:37.237096  627293 kubeadm.go:934] updating node {m03 192.168.39.82 8443 v1.31.2 crio true true} ...
	I1209 10:51:37.237210  627293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792382-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
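	Annotation: in the kubelet unit fragment above, the empty ExecStart= line is deliberate: in a systemd drop-in, an empty assignment clears any ExecStart inherited from the base unit before the next line redefines it with the node-specific flags. A minimal drop-in illustrating the same pattern (paths as in the log, sketch only):
	    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (sketch)
	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml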
	I1209 10:51:37.237247  627293 kube-vip.go:115] generating kube-vip config ...
	I1209 10:51:37.237291  627293 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 10:51:37.254154  627293 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 10:51:37.254286  627293 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
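	Annotation: the manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml as a static pod; the elected kube-vip leader answers ARP for the VIP 192.168.39.254 on eth0 and, with lb_enable set, load-balances API traffic on port 8443. A quick check of who currently holds the VIP and the leader lease, assuming the cluster is reachable:
	    # the VIP appears as a secondary address on the leader's eth0
	    ip -4 addr show dev eth0 | grep 192.168.39.254
	    # leader election uses a Lease in kube-system (lease name taken from the config above)
	    kubectl -n kube-system get lease plndr-cp-lock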
	I1209 10:51:37.254376  627293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 10:51:37.266499  627293 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1209 10:51:37.266573  627293 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1209 10:51:37.276989  627293 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1209 10:51:37.277004  627293 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1209 10:51:37.277031  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 10:51:37.277052  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:51:37.277099  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 10:51:37.276989  627293 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1209 10:51:37.277162  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 10:51:37.277221  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 10:51:37.294260  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 10:51:37.294329  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1209 10:51:37.294354  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1209 10:51:37.294397  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 10:51:37.294410  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1209 10:51:37.294447  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1209 10:51:37.309738  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1209 10:51:37.309777  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
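	Annotation: because the cached binaries were missing on the new node, kubeadm, kubectl, and kubelet are streamed over SSH from the host cache. The same artifacts can be fetched directly from the release URLs shown in the log and verified against the published .sha256 files; a sketch for kubelet:
	    # download kubelet v1.31.2 and verify it against the upstream checksum
	    curl -LO "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"
	    echo "$(curl -Ls https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum --check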
	I1209 10:51:38.106081  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 10:51:38.115636  627293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 10:51:38.132759  627293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 10:51:38.149726  627293 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 10:51:38.166083  627293 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 10:51:38.169937  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:51:38.181150  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:51:38.308494  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:51:38.325679  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:51:38.326045  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:51:38.326105  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:51:38.344459  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45865
	I1209 10:51:38.345084  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:51:38.345753  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:51:38.345796  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:51:38.346197  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:51:38.346437  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:51:38.346586  627293 start.go:317] joinCluster: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:51:38.346740  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 10:51:38.346768  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:51:38.349642  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:51:38.350099  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:51:38.350125  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:51:38.350286  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:51:38.350484  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:51:38.350634  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:51:38.350780  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:51:38.514216  627293 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:51:38.514274  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token exrmr9.huiz7swpoaojy929 --discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-792382-m03 --control-plane --apiserver-advertise-address=192.168.39.82 --apiserver-bind-port=8443"
	I1209 10:52:01.803198  627293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token exrmr9.huiz7swpoaojy929 --discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-792382-m03 --control-plane --apiserver-advertise-address=192.168.39.82 --apiserver-bind-port=8443": (23.288893034s)
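	Annotation: the join above is driven by a token minted on the first control-plane node ("kubeadm token create --print-join-command --ttl=0", run a few lines earlier); minikube then appends the control-plane flags itself, since it has already copied the shared certificates over SSH rather than using a kubeadm certificate key. Reduced to its parts, the flow looks like this (token and hash are placeholders):
	    # on an existing control-plane node: mint a join command
	    sudo kubeadm token create --print-join-command --ttl=0
	    # on the new node: join as an additional control-plane member
	    sudo kubeadm join control-plane.minikube.internal:8443 \
	      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	      --control-plane --apiserver-advertise-address=192.168.39.82 --apiserver-bind-port=8443 \
	      --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-792382-m03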
	I1209 10:52:01.803245  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1209 10:52:02.338453  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792382-m03 minikube.k8s.io/updated_at=2024_12_09T10_52_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=ha-792382 minikube.k8s.io/primary=false
	I1209 10:52:02.475613  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-792382-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 10:52:02.591820  627293 start.go:319] duration metric: took 24.245228011s to joinCluster
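	Annotation: the two kubectl calls above tag the new member with minikube's metadata and then remove the control-plane NoSchedule taint, so the node can also run regular workloads; the trailing "-" on a taint spec means "delete this taint". Equivalent standalone commands:
	    # update (or create) a label in place
	    kubectl label --overwrite nodes ha-792382-m03 minikube.k8s.io/primary=false
	    # the trailing '-' removes the taint instead of adding it
	    kubectl taint nodes ha-792382-m03 node-role.kubernetes.io/control-plane:NoSchedule-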
	I1209 10:52:02.591921  627293 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:52:02.592324  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:52:02.593526  627293 out.go:177] * Verifying Kubernetes components...
	I1209 10:52:02.594809  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:52:02.839263  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:52:02.861519  627293 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:52:02.861874  627293 kapi.go:59] client config for ha-792382: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt", KeyFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key", CAFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 10:52:02.861974  627293 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.69:8443
	I1209 10:52:02.862413  627293 node_ready.go:35] waiting up to 6m0s for node "ha-792382-m03" to be "Ready" ...
	I1209 10:52:02.862536  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:02.862551  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:02.862563  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:02.862569  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:02.866706  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:03.363562  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:03.363585  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:03.363593  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:03.363597  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:03.367171  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:03.863250  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:03.863275  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:03.863284  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:03.863288  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:03.866476  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:04.363562  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:04.363593  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:04.363607  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:04.363611  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:04.367286  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:04.862912  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:04.862943  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:04.862957  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:04.862964  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:04.866217  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:04.866889  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:05.363334  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:05.363359  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:05.363368  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:05.363371  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:05.366850  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:05.863531  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:05.863565  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:05.863577  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:05.863584  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:05.867191  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:06.363075  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:06.363103  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:06.363116  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:06.363123  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:06.368722  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:06.862720  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:06.862750  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:06.862764  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:06.862773  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:06.865876  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:07.363131  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:07.363158  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:07.363167  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:07.363181  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:07.366603  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:07.367388  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:07.862715  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:07.862743  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:07.862756  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:07.862762  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:07.866073  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:08.362710  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:08.362744  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:08.362756  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:08.362763  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:08.366953  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:08.862771  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:08.862799  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:08.862808  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:08.862813  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:08.866875  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:09.362787  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:09.362812  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:09.362820  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:09.362824  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:09.367053  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:09.367603  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:09.862752  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:09.862786  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:09.862803  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:09.862809  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:09.866207  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:10.363296  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:10.363329  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:10.363341  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:10.363347  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:10.368594  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:10.863471  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:10.863504  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:10.863518  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:10.863523  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:10.868956  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:11.362961  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:11.362988  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:11.362998  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:11.363003  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:11.366828  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:11.862866  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:11.862896  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:11.862906  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:11.862912  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:11.868040  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:11.868910  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:12.363520  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:12.363543  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:12.363551  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:12.363555  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:12.367064  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:12.862709  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:12.862738  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:12.862747  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:12.862751  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:12.866024  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:13.362946  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:13.362972  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:13.362981  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:13.362985  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:13.367208  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:13.863257  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:13.863282  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:13.863291  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:13.863295  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:13.866570  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:14.363551  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:14.363576  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:14.363588  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:14.363595  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:14.367509  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:14.368341  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:14.863449  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:14.863475  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:14.863485  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:14.863492  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:14.866808  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:15.363473  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:15.363501  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:15.363510  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:15.363514  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:15.367252  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:15.863063  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:15.863086  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:15.863095  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:15.863099  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:15.866694  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:16.363487  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:16.363515  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:16.363525  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:16.363529  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:16.366968  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:16.863237  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:16.863267  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:16.863277  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:16.863285  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:16.866528  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:16.867067  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:17.363592  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:17.363616  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:17.363628  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:17.363634  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:17.367261  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:17.863310  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:17.863334  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:17.863343  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:17.863347  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:17.866881  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:18.363575  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:18.363603  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:18.363614  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:18.363624  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:18.368502  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:18.863660  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:18.863684  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:18.863693  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:18.863698  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:18.866946  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:18.867391  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:19.362762  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:19.362786  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:19.362794  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:19.362798  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:19.366684  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:19.863495  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:19.863581  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:19.863600  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:19.863608  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:19.870858  627293 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1209 10:52:20.363448  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:20.363473  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.363482  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.363487  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.367472  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.368003  627293 node_ready.go:49] node "ha-792382-m03" has status "Ready":"True"
	I1209 10:52:20.368025  627293 node_ready.go:38] duration metric: took 17.505584111s for node "ha-792382-m03" to be "Ready" ...
	I1209 10:52:20.368035  627293 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
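	Annotation: the polling loop above issues a GET against /api/v1/nodes/ha-792382-m03 roughly every 500ms until the Ready condition flips to True (about 17.5s here), then repeats the same pattern for each system-critical pod in kube-system. With kubectl, the equivalent blocking waits would be:
	    # block until the node reports Ready, then until the kube-system pods are Ready
	    kubectl wait --for=condition=Ready node/ha-792382-m03 --timeout=6m
	    kubectl wait --for=condition=Ready pod --all -n kube-system --timeout=6m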
	I1209 10:52:20.368124  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:20.368135  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.368143  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.368147  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.375067  627293 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 10:52:20.382809  627293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.382913  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8hlml
	I1209 10:52:20.382922  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.382932  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.382939  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.386681  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.387473  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:20.387492  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.387502  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.387506  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.390201  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:20.390989  627293 pod_ready.go:93] pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.391012  627293 pod_ready.go:82] duration metric: took 8.170284ms for pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.391025  627293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.391107  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rz6mw
	I1209 10:52:20.391121  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.391132  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.391139  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.393896  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:20.394886  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:20.394902  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.394910  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.394913  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.397630  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:20.398092  627293 pod_ready.go:93] pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.398114  627293 pod_ready.go:82] duration metric: took 7.080989ms for pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.398128  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.398227  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382
	I1209 10:52:20.398238  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.398249  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.398255  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.402755  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:20.403454  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:20.403477  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.403487  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.403495  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.407171  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.407675  627293 pod_ready.go:93] pod "etcd-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.407690  627293 pod_ready.go:82] duration metric: took 9.55619ms for pod "etcd-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.407701  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.407761  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382-m02
	I1209 10:52:20.407769  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.407776  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.407782  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.411699  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.412198  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:20.412214  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.412221  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.412228  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.415128  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:20.415876  627293 pod_ready.go:93] pod "etcd-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.415895  627293 pod_ready.go:82] duration metric: took 8.185439ms for pod "etcd-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.415927  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.564348  627293 request.go:632] Waited for 148.293235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382-m03
	I1209 10:52:20.564443  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382-m03
	I1209 10:52:20.564455  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.564475  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.564485  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.567758  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.763843  627293 request.go:632] Waited for 195.366287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:20.763920  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:20.763933  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.763945  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.763957  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.772124  627293 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1209 10:52:20.772769  627293 pod_ready.go:93] pod "etcd-ha-792382-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.772802  627293 pod_ready.go:82] duration metric: took 356.849767ms for pod "etcd-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.772827  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.963692  627293 request.go:632] Waited for 190.744323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382
	I1209 10:52:20.963762  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382
	I1209 10:52:20.963767  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.963775  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.963781  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.966983  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.163987  627293 request.go:632] Waited for 196.382643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:21.164057  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:21.164062  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.164070  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.164074  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.167406  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.168047  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:21.168074  627293 pod_ready.go:82] duration metric: took 395.237987ms for pod "kube-apiserver-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.168086  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.364059  627293 request.go:632] Waited for 195.853676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m02
	I1209 10:52:21.364141  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m02
	I1209 10:52:21.364147  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.364155  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.364164  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.368500  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:21.563923  627293 request.go:632] Waited for 194.790397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:21.563997  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:21.564006  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.564018  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.564029  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.567739  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.568495  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:21.568518  627293 pod_ready.go:82] duration metric: took 400.423423ms for pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.568529  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.763480  627293 request.go:632] Waited for 194.86491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m03
	I1209 10:52:21.763574  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m03
	I1209 10:52:21.763581  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.763594  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.763602  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.767033  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.964208  627293 request.go:632] Waited for 196.356498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:21.964296  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:21.964305  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.964340  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.964351  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.967752  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.968228  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:21.968247  627293 pod_ready.go:82] duration metric: took 399.712092ms for pod "kube-apiserver-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.968258  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.163746  627293 request.go:632] Waited for 195.415661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382
	I1209 10:52:22.163805  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382
	I1209 10:52:22.163810  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.163823  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.163830  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.166645  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:22.364336  627293 request.go:632] Waited for 197.03194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:22.364428  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:22.364449  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.364480  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.364491  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.368286  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:22.369016  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:22.369039  627293 pod_ready.go:82] duration metric: took 400.774826ms for pod "kube-controller-manager-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.369050  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.564041  627293 request.go:632] Waited for 194.907266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m02
	I1209 10:52:22.564119  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m02
	I1209 10:52:22.564127  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.564140  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.564149  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.567707  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:22.763845  627293 request.go:632] Waited for 195.40032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:22.763928  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:22.763935  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.763956  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.763982  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.767705  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:22.768312  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:22.768335  627293 pod_ready.go:82] duration metric: took 399.277854ms for pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.768350  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.964360  627293 request.go:632] Waited for 195.903206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m03
	I1209 10:52:22.964433  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m03
	I1209 10:52:22.964446  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.964457  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.964465  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.967540  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:23.163523  627293 request.go:632] Waited for 195.162382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:23.163590  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:23.163596  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.163611  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.163618  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.166875  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:23.167557  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:23.167581  627293 pod_ready.go:82] duration metric: took 399.219283ms for pod "kube-controller-manager-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.167592  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2l42s" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.364163  627293 request.go:632] Waited for 196.469736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2l42s
	I1209 10:52:23.364233  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2l42s
	I1209 10:52:23.364240  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.364250  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.364256  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.368871  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:23.564369  627293 request.go:632] Waited for 194.396631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:23.564485  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:23.564496  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.564504  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.564509  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.567861  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:23.568367  627293 pod_ready.go:93] pod "kube-proxy-2l42s" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:23.568387  627293 pod_ready.go:82] duration metric: took 400.786442ms for pod "kube-proxy-2l42s" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.568400  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dckpl" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.763515  627293 request.go:632] Waited for 195.023087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dckpl
	I1209 10:52:23.763600  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dckpl
	I1209 10:52:23.763608  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.763619  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.763628  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.767899  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:23.964038  627293 request.go:632] Waited for 195.369645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:23.964137  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:23.964144  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.964152  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.964161  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.967628  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:23.968543  627293 pod_ready.go:93] pod "kube-proxy-dckpl" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:23.968572  627293 pod_ready.go:82] duration metric: took 400.162458ms for pod "kube-proxy-dckpl" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.968586  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wrvgb" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.164418  627293 request.go:632] Waited for 195.731455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgb
	I1209 10:52:24.164497  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgb
	I1209 10:52:24.164502  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.164511  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.164516  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.167227  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:24.364211  627293 request.go:632] Waited for 196.319396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:24.364296  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:24.364308  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.364319  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.364330  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.368387  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:24.369158  627293 pod_ready.go:93] pod "kube-proxy-wrvgb" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:24.369182  627293 pod_ready.go:82] duration metric: took 400.580765ms for pod "kube-proxy-wrvgb" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.369195  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.564251  627293 request.go:632] Waited for 194.959562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382
	I1209 10:52:24.564342  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382
	I1209 10:52:24.564348  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.564357  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.564361  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.568298  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:24.764304  627293 request.go:632] Waited for 195.363618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:24.764392  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:24.764408  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.764418  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.764425  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.768139  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:24.768711  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:24.768733  627293 pod_ready.go:82] duration metric: took 399.519254ms for pod "kube-scheduler-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.768746  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.963667  627293 request.go:632] Waited for 194.82946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m02
	I1209 10:52:24.963730  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m02
	I1209 10:52:24.963736  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.963744  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.963749  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.967092  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:25.164276  627293 request.go:632] Waited for 196.380929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:25.164345  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:25.164349  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.164358  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.164364  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.169070  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:25.169673  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:25.169696  627293 pod_ready.go:82] duration metric: took 400.939865ms for pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:25.169706  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:25.363779  627293 request.go:632] Waited for 193.996151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m03
	I1209 10:52:25.363866  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m03
	I1209 10:52:25.363882  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.363912  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.363923  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.367885  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:25.563919  627293 request.go:632] Waited for 195.39244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:25.563987  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:25.563992  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.564000  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.564003  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.567759  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:25.568223  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:25.568247  627293 pod_ready.go:82] duration metric: took 398.53325ms for pod "kube-scheduler-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:25.568262  627293 pod_ready.go:39] duration metric: took 5.200212564s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 10:52:25.568288  627293 api_server.go:52] waiting for apiserver process to appear ...
	I1209 10:52:25.568359  627293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 10:52:25.588000  627293 api_server.go:72] duration metric: took 22.996035203s to wait for apiserver process to appear ...
	I1209 10:52:25.588031  627293 api_server.go:88] waiting for apiserver healthz status ...
	I1209 10:52:25.588055  627293 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I1209 10:52:25.592469  627293 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I1209 10:52:25.592544  627293 round_trippers.go:463] GET https://192.168.39.69:8443/version
	I1209 10:52:25.592549  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.592557  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.592563  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.593630  627293 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1209 10:52:25.593699  627293 api_server.go:141] control plane version: v1.31.2
	I1209 10:52:25.593714  627293 api_server.go:131] duration metric: took 5.676129ms to wait for apiserver health ...
	I1209 10:52:25.593722  627293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 10:52:25.764156  627293 request.go:632] Waited for 170.352326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:25.764268  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:25.764281  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.764294  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.764301  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.774462  627293 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1209 10:52:25.781848  627293 system_pods.go:59] 24 kube-system pods found
	I1209 10:52:25.781880  627293 system_pods.go:61] "coredns-7c65d6cfc9-8hlml" [d820cd6c-5064-4934-adc8-c68f84c09b46] Running
	I1209 10:52:25.781886  627293 system_pods.go:61] "coredns-7c65d6cfc9-rz6mw" [af297b6d-91f1-4114-b98c-cdfdfbd1589e] Running
	I1209 10:52:25.781890  627293 system_pods.go:61] "etcd-ha-792382" [eee2fbad-7f2f-4f5a-b701-abdbf8456f99] Running
	I1209 10:52:25.781894  627293 system_pods.go:61] "etcd-ha-792382-m02" [424768ff-af5d-4a31-b062-ab2c9576884d] Running
	I1209 10:52:25.781897  627293 system_pods.go:61] "etcd-ha-792382-m03" [4112b988-6915-413a-badd-c0207865e60d] Running
	I1209 10:52:25.781900  627293 system_pods.go:61] "kindnet-6hlht" [23156ebc-d366-4fc2-bedb-7a63e950b116] Running
	I1209 10:52:25.781903  627293 system_pods.go:61] "kindnet-bqp2z" [b2c40579-4d72-4efe-b921-1e0f98b91544] Running
	I1209 10:52:25.781906  627293 system_pods.go:61] "kindnet-hkrhk" [9b35011c-ab45-4f55-a60f-08f9c4509c1d] Running
	I1209 10:52:25.781909  627293 system_pods.go:61] "kube-apiserver-ha-792382" [5157cfb0-bc91-4efe-b2e2-689778d5b012] Running
	I1209 10:52:25.781913  627293 system_pods.go:61] "kube-apiserver-ha-792382-m02" [71f9ce1c-aeff-4853-83ec-01a7fb6a81d5] Running
	I1209 10:52:25.781916  627293 system_pods.go:61] "kube-apiserver-ha-792382-m03" [5cd4395c-58a8-45ba-90ea-72105d25fadd] Running
	I1209 10:52:25.781919  627293 system_pods.go:61] "kube-controller-manager-ha-792382" [f8cb7175-f6fd-4dcf-a6a5-66c44aa136ce] Running
	I1209 10:52:25.781922  627293 system_pods.go:61] "kube-controller-manager-ha-792382-m02" [2f7d8deb-ddad-47ac-ad18-5f8e7d0e095d] Running
	I1209 10:52:25.781926  627293 system_pods.go:61] "kube-controller-manager-ha-792382-m03" [5c5d03de-e7e9-491b-a6fd-fdc50b4ce7ed] Running
	I1209 10:52:25.781930  627293 system_pods.go:61] "kube-proxy-2l42s" [a4bfe3cb-9b06-4d1e-9887-c461d31aaaec] Running
	I1209 10:52:25.781934  627293 system_pods.go:61] "kube-proxy-dckpl" [13f7bda1-f9c2-4fd0-96e0-b6aee1139bc1] Running
	I1209 10:52:25.781940  627293 system_pods.go:61] "kube-proxy-wrvgb" [2531e29f-a4d5-41f9-8c38-3220b4caf96b] Running
	I1209 10:52:25.781942  627293 system_pods.go:61] "kube-scheduler-ha-792382" [b11693d0-b2e7-45a1-a1a2-6519a1535c45] Running
	I1209 10:52:25.781945  627293 system_pods.go:61] "kube-scheduler-ha-792382-m02" [250ce40c-5cbf-4f5d-a475-4f3d7ec100d9] Running
	I1209 10:52:25.781948  627293 system_pods.go:61] "kube-scheduler-ha-792382-m03" [b994f699-40b5-423e-b92f-3ca6208e69d0] Running
	I1209 10:52:25.781951  627293 system_pods.go:61] "kube-vip-ha-792382" [511f3a84-b444-4603-8f6f-eeeb262b1384] Running
	I1209 10:52:25.781954  627293 system_pods.go:61] "kube-vip-ha-792382-m02" [f2110c28-09f5-42f9-8e16-beec9759a0a2] Running
	I1209 10:52:25.781957  627293 system_pods.go:61] "kube-vip-ha-792382-m03" [5eee7c3c-1b75-48ad-813e-963fa4308d1b] Running
	I1209 10:52:25.781960  627293 system_pods.go:61] "storage-provisioner" [4419fe4f-e2ed-4ecb-a912-2dd074e29727] Running
	I1209 10:52:25.781965  627293 system_pods.go:74] duration metric: took 188.238253ms to wait for pod list to return data ...
	I1209 10:52:25.781976  627293 default_sa.go:34] waiting for default service account to be created ...
	I1209 10:52:25.964450  627293 request.go:632] Waited for 182.375955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I1209 10:52:25.964524  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I1209 10:52:25.964529  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.964538  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.964543  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.968489  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:25.968636  627293 default_sa.go:45] found service account: "default"
	I1209 10:52:25.968653  627293 default_sa.go:55] duration metric: took 186.669919ms for default service account to be created ...
	I1209 10:52:25.968664  627293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 10:52:26.163895  627293 request.go:632] Waited for 195.104758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:26.163963  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:26.163969  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:26.163977  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:26.163981  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:26.169457  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:26.176126  627293 system_pods.go:86] 24 kube-system pods found
	I1209 10:52:26.176160  627293 system_pods.go:89] "coredns-7c65d6cfc9-8hlml" [d820cd6c-5064-4934-adc8-c68f84c09b46] Running
	I1209 10:52:26.176166  627293 system_pods.go:89] "coredns-7c65d6cfc9-rz6mw" [af297b6d-91f1-4114-b98c-cdfdfbd1589e] Running
	I1209 10:52:26.176171  627293 system_pods.go:89] "etcd-ha-792382" [eee2fbad-7f2f-4f5a-b701-abdbf8456f99] Running
	I1209 10:52:26.176175  627293 system_pods.go:89] "etcd-ha-792382-m02" [424768ff-af5d-4a31-b062-ab2c9576884d] Running
	I1209 10:52:26.176178  627293 system_pods.go:89] "etcd-ha-792382-m03" [4112b988-6915-413a-badd-c0207865e60d] Running
	I1209 10:52:26.176184  627293 system_pods.go:89] "kindnet-6hlht" [23156ebc-d366-4fc2-bedb-7a63e950b116] Running
	I1209 10:52:26.176189  627293 system_pods.go:89] "kindnet-bqp2z" [b2c40579-4d72-4efe-b921-1e0f98b91544] Running
	I1209 10:52:26.176195  627293 system_pods.go:89] "kindnet-hkrhk" [9b35011c-ab45-4f55-a60f-08f9c4509c1d] Running
	I1209 10:52:26.176201  627293 system_pods.go:89] "kube-apiserver-ha-792382" [5157cfb0-bc91-4efe-b2e2-689778d5b012] Running
	I1209 10:52:26.176206  627293 system_pods.go:89] "kube-apiserver-ha-792382-m02" [71f9ce1c-aeff-4853-83ec-01a7fb6a81d5] Running
	I1209 10:52:26.176212  627293 system_pods.go:89] "kube-apiserver-ha-792382-m03" [5cd4395c-58a8-45ba-90ea-72105d25fadd] Running
	I1209 10:52:26.176220  627293 system_pods.go:89] "kube-controller-manager-ha-792382" [f8cb7175-f6fd-4dcf-a6a5-66c44aa136ce] Running
	I1209 10:52:26.176231  627293 system_pods.go:89] "kube-controller-manager-ha-792382-m02" [2f7d8deb-ddad-47ac-ad18-5f8e7d0e095d] Running
	I1209 10:52:26.176240  627293 system_pods.go:89] "kube-controller-manager-ha-792382-m03" [5c5d03de-e7e9-491b-a6fd-fdc50b4ce7ed] Running
	I1209 10:52:26.176245  627293 system_pods.go:89] "kube-proxy-2l42s" [a4bfe3cb-9b06-4d1e-9887-c461d31aaaec] Running
	I1209 10:52:26.176254  627293 system_pods.go:89] "kube-proxy-dckpl" [13f7bda1-f9c2-4fd0-96e0-b6aee1139bc1] Running
	I1209 10:52:26.176263  627293 system_pods.go:89] "kube-proxy-wrvgb" [2531e29f-a4d5-41f9-8c38-3220b4caf96b] Running
	I1209 10:52:26.176272  627293 system_pods.go:89] "kube-scheduler-ha-792382" [b11693d0-b2e7-45a1-a1a2-6519a1535c45] Running
	I1209 10:52:26.176285  627293 system_pods.go:89] "kube-scheduler-ha-792382-m02" [250ce40c-5cbf-4f5d-a475-4f3d7ec100d9] Running
	I1209 10:52:26.176294  627293 system_pods.go:89] "kube-scheduler-ha-792382-m03" [b994f699-40b5-423e-b92f-3ca6208e69d0] Running
	I1209 10:52:26.176303  627293 system_pods.go:89] "kube-vip-ha-792382" [511f3a84-b444-4603-8f6f-eeeb262b1384] Running
	I1209 10:52:26.176312  627293 system_pods.go:89] "kube-vip-ha-792382-m02" [f2110c28-09f5-42f9-8e16-beec9759a0a2] Running
	I1209 10:52:26.176320  627293 system_pods.go:89] "kube-vip-ha-792382-m03" [5eee7c3c-1b75-48ad-813e-963fa4308d1b] Running
	I1209 10:52:26.176327  627293 system_pods.go:89] "storage-provisioner" [4419fe4f-e2ed-4ecb-a912-2dd074e29727] Running
	I1209 10:52:26.176338  627293 system_pods.go:126] duration metric: took 207.663846ms to wait for k8s-apps to be running ...
	I1209 10:52:26.176348  627293 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 10:52:26.176410  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:52:26.193241  627293 system_svc.go:56] duration metric: took 16.882967ms WaitForService to wait for kubelet
	I1209 10:52:26.193274  627293 kubeadm.go:582] duration metric: took 23.601316183s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 10:52:26.193295  627293 node_conditions.go:102] verifying NodePressure condition ...
	I1209 10:52:26.363791  627293 request.go:632] Waited for 170.378697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes
	I1209 10:52:26.363869  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes
	I1209 10:52:26.363877  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:26.363893  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:26.363902  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:26.369525  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:26.370723  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:52:26.370747  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:52:26.370760  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:52:26.370763  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:52:26.370766  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:52:26.370770  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:52:26.370774  627293 node_conditions.go:105] duration metric: took 177.473705ms to run NodePressure ...
	I1209 10:52:26.370790  627293 start.go:241] waiting for startup goroutines ...
	I1209 10:52:26.370823  627293 start.go:255] writing updated cluster config ...
	I1209 10:52:26.371156  627293 ssh_runner.go:195] Run: rm -f paused
	I1209 10:52:26.426485  627293 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 10:52:26.428634  627293 out.go:177] * Done! kubectl is now configured to use "ha-792382" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.442162506Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=95190694-12fd-4672-89c9-3e18f052c5ee name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.443582284Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9367c5a2-8d45-4157-bd62-139b0a670cd3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.444029707Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741778444009275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9367c5a2-8d45-4157-bd62-139b0a670cd3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.444502400Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1e6672b-8373-46df-a693-cb591a355afe name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.444570041Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1e6672b-8373-46df-a693-cb591a355afe name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.444828850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3354d3bec20606deb1f19130215939f5c7b2123b73319ef892bffc278d375fb8,PodSandboxId:e47f42b7e0900b7149c93ac504858a9b845addf2278ae1c0abd45e66fc9df066,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733741551077576684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z9wjm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b911f2-4cd1-486a-9276-1e98745ede0e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd,PodSandboxId:a5c60a0e3c19becc3387cabd45cb24c34922bd26351286c88744f9e2caf6f8d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412981896414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8hlml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d820cd6c-5064-4934-adc8-c68f84c09b46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733,PodSandboxId:038ff3d97cfe50dddc147974bab92d243212a5896794a4bc3f1ce2b77fdb5e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412958470450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rz6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
af297b6d-91f1-4114-b98c-cdfdfbd1589e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa96349b5a7bf6e8223fd16d01f77aa0d3c45aa83ad28fc42eec5d2a80dd24,PodSandboxId:02bd44e5a67d9df1c56cc6297ec66940ce286c89b103f7efa4b35259d2a2f8c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733741412903452485,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4419fe4f-e2ed-4ecb-a912-2dd074e29727,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3,PodSandboxId:cfb791c6d05cec0b9cc244408089c309a41f321fe06c2083eabaeaf9184c0ff6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733741400823508110,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c40579-4d72-4efe-b921-1e0f98b91544,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522,PodSandboxId:82b54a7467a7aa9cf761959fb4e7e7ea830c6fdcf33629ec45c8b55326334f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733741398
382511881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrvgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2531e29f-a4d5-41f9-8c38-3220b4caf96b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082e8ff7e6c7e3e1110852687152ddb22202b3134aa3105a8b11aa7d702bcd6a,PodSandboxId:1486ff19db45e7b9cb8b545f56579a8765f6cef7fe3b1d44f967a5c5823b4cdc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173374138888
4669062,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9922f13afb31842008ba0179dabd897e,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63,PodSandboxId:27e12e36b1bd81bfdb6cc876b8c1a9c925ed2eec6ac687056f4bd10eb872b7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733741386069265058,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2460a8b15a62b9cf3ad5343586bde402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f,PodSandboxId:7bbf390b8ef03829849defea1ba3817acb7a86ce1705fb53d765d7ddca57a066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733741386071035939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89a89b1c65df6e3ad9608c5607172f77,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee,PodSandboxId:9493b93aded71efd1c05e2df3b9537c0c63dc9b2e9abdbf1be430d3c29d5ad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733741386009117508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 082fcfac40bcf36b76f1e733a9f73bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604,PodSandboxId:02e8433fa67cc5aef5d43a5edf820340ad5d91eef3bdd9e7149eeec0e55a8c95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733741385994463722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d8d358ed72ac30c9365aedd3aee4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1e6672b-8373-46df-a693-cb591a355afe name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.479985652Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd45f4ee-d50b-4933-98fe-ef601b1231f0 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.480077399Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd45f4ee-d50b-4933-98fe-ef601b1231f0 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.481280527Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4eb0c7f-43d3-4048-a8d1-e33a7c264a3b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.481844601Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741778481815149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4eb0c7f-43d3-4048-a8d1-e33a7c264a3b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.482282565Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee8ff7b7-9566-4740-ac46-e2ff4ef1895a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.482397400Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee8ff7b7-9566-4740-ac46-e2ff4ef1895a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.482648170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3354d3bec20606deb1f19130215939f5c7b2123b73319ef892bffc278d375fb8,PodSandboxId:e47f42b7e0900b7149c93ac504858a9b845addf2278ae1c0abd45e66fc9df066,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733741551077576684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z9wjm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b911f2-4cd1-486a-9276-1e98745ede0e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd,PodSandboxId:a5c60a0e3c19becc3387cabd45cb24c34922bd26351286c88744f9e2caf6f8d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412981896414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8hlml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d820cd6c-5064-4934-adc8-c68f84c09b46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733,PodSandboxId:038ff3d97cfe50dddc147974bab92d243212a5896794a4bc3f1ce2b77fdb5e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412958470450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rz6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
af297b6d-91f1-4114-b98c-cdfdfbd1589e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa96349b5a7bf6e8223fd16d01f77aa0d3c45aa83ad28fc42eec5d2a80dd24,PodSandboxId:02bd44e5a67d9df1c56cc6297ec66940ce286c89b103f7efa4b35259d2a2f8c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733741412903452485,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4419fe4f-e2ed-4ecb-a912-2dd074e29727,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3,PodSandboxId:cfb791c6d05cec0b9cc244408089c309a41f321fe06c2083eabaeaf9184c0ff6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733741400823508110,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c40579-4d72-4efe-b921-1e0f98b91544,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522,PodSandboxId:82b54a7467a7aa9cf761959fb4e7e7ea830c6fdcf33629ec45c8b55326334f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733741398
382511881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrvgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2531e29f-a4d5-41f9-8c38-3220b4caf96b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082e8ff7e6c7e3e1110852687152ddb22202b3134aa3105a8b11aa7d702bcd6a,PodSandboxId:1486ff19db45e7b9cb8b545f56579a8765f6cef7fe3b1d44f967a5c5823b4cdc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173374138888
4669062,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9922f13afb31842008ba0179dabd897e,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63,PodSandboxId:27e12e36b1bd81bfdb6cc876b8c1a9c925ed2eec6ac687056f4bd10eb872b7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733741386069265058,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2460a8b15a62b9cf3ad5343586bde402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f,PodSandboxId:7bbf390b8ef03829849defea1ba3817acb7a86ce1705fb53d765d7ddca57a066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733741386071035939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89a89b1c65df6e3ad9608c5607172f77,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee,PodSandboxId:9493b93aded71efd1c05e2df3b9537c0c63dc9b2e9abdbf1be430d3c29d5ad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733741386009117508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 082fcfac40bcf36b76f1e733a9f73bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604,PodSandboxId:02e8433fa67cc5aef5d43a5edf820340ad5d91eef3bdd9e7149eeec0e55a8c95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733741385994463722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d8d358ed72ac30c9365aedd3aee4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee8ff7b7-9566-4740-ac46-e2ff4ef1895a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.492263698Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=d34ff141-c93f-485c-8338-2ab23d985cc3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.492615453Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e47f42b7e0900b7149c93ac504858a9b845addf2278ae1c0abd45e66fc9df066,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-z9wjm,Uid:00b911f2-4cd1-486a-9276-1e98745ede0e,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733741547721451467,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-z9wjm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b911f2-4cd1-486a-9276-1e98745ede0e,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T10:52:27.406707129Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:02bd44e5a67d9df1c56cc6297ec66940ce286c89b103f7efa4b35259d2a2f8c7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4419fe4f-e2ed-4ecb-a912-2dd074e29727,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1733741412725372136,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4419fe4f-e2ed-4ecb-a912-2dd074e29727,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-09T10:50:12.389187976Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:038ff3d97cfe50dddc147974bab92d243212a5896794a4bc3f1ce2b77fdb5e39,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-rz6mw,Uid:af297b6d-91f1-4114-b98c-cdfdfbd1589e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733741412714144056,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-rz6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af297b6d-91f1-4114-b98c-cdfdfbd1589e,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T10:50:12.385407546Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a5c60a0e3c19becc3387cabd45cb24c34922bd26351286c88744f9e2caf6f8d6,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-8hlml,Uid:d820cd6c-5064-4934-adc8-c68f84c09b46,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1733741412691272331,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-8hlml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d820cd6c-5064-4934-adc8-c68f84c09b46,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T10:50:12.378384594Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:82b54a7467a7aa9cf761959fb4e7e7ea830c6fdcf33629ec45c8b55326334f67,Metadata:&PodSandboxMetadata{Name:kube-proxy-wrvgb,Uid:2531e29f-a4d5-41f9-8c38-3220b4caf96b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733741398278045244,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wrvgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2531e29f-a4d5-41f9-8c38-3220b4caf96b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-12-09T10:49:56.468694189Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cfb791c6d05cec0b9cc244408089c309a41f321fe06c2083eabaeaf9184c0ff6,Metadata:&PodSandboxMetadata{Name:kindnet-bqp2z,Uid:b2c40579-4d72-4efe-b921-1e0f98b91544,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733741396742615236,Labels:map[string]string{app: kindnet,controller-revision-hash: 7dff7cd75d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-bqp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c40579-4d72-4efe-b921-1e0f98b91544,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T10:49:56.430662967Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9493b93aded71efd1c05e2df3b9537c0c63dc9b2e9abdbf1be430d3c29d5ad9f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-792382,Uid:082fcfac40bcf36b76f1e733a9f73bc8,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1733741385787750594,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 082fcfac40bcf36b76f1e733a9f73bc8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 082fcfac40bcf36b76f1e733a9f73bc8,kubernetes.io/config.seen: 2024-12-09T10:49:45.114989762Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:02e8433fa67cc5aef5d43a5edf820340ad5d91eef3bdd9e7149eeec0e55a8c95,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-792382,Uid:a4d8d358ed72ac30c9365aedd3aee4d1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733741385786710751,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d8d358ed72ac30c9365aedd3aee4d
1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a4d8d358ed72ac30c9365aedd3aee4d1,kubernetes.io/config.seen: 2024-12-09T10:49:45.114988700Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7bbf390b8ef03829849defea1ba3817acb7a86ce1705fb53d765d7ddca57a066,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-792382,Uid:89a89b1c65df6e3ad9608c5607172f77,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733741385780260822,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89a89b1c65df6e3ad9608c5607172f77,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.69:8443,kubernetes.io/config.hash: 89a89b1c65df6e3ad9608c5607172f77,kubernetes.io/config.seen: 2024-12-09T10:49:45.114987412Z,kubernetes.io/config.source: file,},RuntimeHandler:,}
,&PodSandbox{Id:27e12e36b1bd81bfdb6cc876b8c1a9c925ed2eec6ac687056f4bd10eb872b7a6,Metadata:&PodSandboxMetadata{Name:etcd-ha-792382,Uid:2460a8b15a62b9cf3ad5343586bde402,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733741385774534212,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2460a8b15a62b9cf3ad5343586bde402,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.69:2379,kubernetes.io/config.hash: 2460a8b15a62b9cf3ad5343586bde402,kubernetes.io/config.seen: 2024-12-09T10:49:45.114986053Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1486ff19db45e7b9cb8b545f56579a8765f6cef7fe3b1d44f967a5c5823b4cdc,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-792382,Uid:9922f13afb31842008ba0179dabd897e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733741385757881865,Labels:
map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9922f13afb31842008ba0179dabd897e,},Annotations:map[string]string{kubernetes.io/config.hash: 9922f13afb31842008ba0179dabd897e,kubernetes.io/config.seen: 2024-12-09T10:49:45.114982392Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d34ff141-c93f-485c-8338-2ab23d985cc3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.493224256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=236284e1-bba4-4407-a142-35999469996d name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.493299275Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=236284e1-bba4-4407-a142-35999469996d name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.493656150Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3354d3bec20606deb1f19130215939f5c7b2123b73319ef892bffc278d375fb8,PodSandboxId:e47f42b7e0900b7149c93ac504858a9b845addf2278ae1c0abd45e66fc9df066,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733741551077576684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z9wjm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b911f2-4cd1-486a-9276-1e98745ede0e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd,PodSandboxId:a5c60a0e3c19becc3387cabd45cb24c34922bd26351286c88744f9e2caf6f8d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412981896414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8hlml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d820cd6c-5064-4934-adc8-c68f84c09b46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733,PodSandboxId:038ff3d97cfe50dddc147974bab92d243212a5896794a4bc3f1ce2b77fdb5e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412958470450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rz6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
af297b6d-91f1-4114-b98c-cdfdfbd1589e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa96349b5a7bf6e8223fd16d01f77aa0d3c45aa83ad28fc42eec5d2a80dd24,PodSandboxId:02bd44e5a67d9df1c56cc6297ec66940ce286c89b103f7efa4b35259d2a2f8c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733741412903452485,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4419fe4f-e2ed-4ecb-a912-2dd074e29727,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3,PodSandboxId:cfb791c6d05cec0b9cc244408089c309a41f321fe06c2083eabaeaf9184c0ff6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733741400823508110,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c40579-4d72-4efe-b921-1e0f98b91544,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522,PodSandboxId:82b54a7467a7aa9cf761959fb4e7e7ea830c6fdcf33629ec45c8b55326334f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733741398
382511881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrvgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2531e29f-a4d5-41f9-8c38-3220b4caf96b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082e8ff7e6c7e3e1110852687152ddb22202b3134aa3105a8b11aa7d702bcd6a,PodSandboxId:1486ff19db45e7b9cb8b545f56579a8765f6cef7fe3b1d44f967a5c5823b4cdc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173374138888
4669062,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9922f13afb31842008ba0179dabd897e,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63,PodSandboxId:27e12e36b1bd81bfdb6cc876b8c1a9c925ed2eec6ac687056f4bd10eb872b7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733741386069265058,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2460a8b15a62b9cf3ad5343586bde402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f,PodSandboxId:7bbf390b8ef03829849defea1ba3817acb7a86ce1705fb53d765d7ddca57a066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733741386071035939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89a89b1c65df6e3ad9608c5607172f77,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee,PodSandboxId:9493b93aded71efd1c05e2df3b9537c0c63dc9b2e9abdbf1be430d3c29d5ad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733741386009117508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 082fcfac40bcf36b76f1e733a9f73bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604,PodSandboxId:02e8433fa67cc5aef5d43a5edf820340ad5d91eef3bdd9e7149eeec0e55a8c95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733741385994463722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d8d358ed72ac30c9365aedd3aee4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=236284e1-bba4-4407-a142-35999469996d name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.521574096Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2aee54d-c2fb-4bad-a375-043c4e0237d9 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.521659151Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2aee54d-c2fb-4bad-a375-043c4e0237d9 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.522870886Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6520c962-4cc3-46e6-b9e6-64f78268cc3b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.523308587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741778523287809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6520c962-4cc3-46e6-b9e6-64f78268cc3b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.524083967Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28d93817-c684-468f-aaa2-5c96b995c736 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.524132710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28d93817-c684-468f-aaa2-5c96b995c736 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:18 ha-792382 crio[665]: time="2024-12-09 10:56:18.524471522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3354d3bec20606deb1f19130215939f5c7b2123b73319ef892bffc278d375fb8,PodSandboxId:e47f42b7e0900b7149c93ac504858a9b845addf2278ae1c0abd45e66fc9df066,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733741551077576684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z9wjm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b911f2-4cd1-486a-9276-1e98745ede0e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd,PodSandboxId:a5c60a0e3c19becc3387cabd45cb24c34922bd26351286c88744f9e2caf6f8d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412981896414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8hlml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d820cd6c-5064-4934-adc8-c68f84c09b46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733,PodSandboxId:038ff3d97cfe50dddc147974bab92d243212a5896794a4bc3f1ce2b77fdb5e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412958470450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rz6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
af297b6d-91f1-4114-b98c-cdfdfbd1589e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa96349b5a7bf6e8223fd16d01f77aa0d3c45aa83ad28fc42eec5d2a80dd24,PodSandboxId:02bd44e5a67d9df1c56cc6297ec66940ce286c89b103f7efa4b35259d2a2f8c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733741412903452485,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4419fe4f-e2ed-4ecb-a912-2dd074e29727,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3,PodSandboxId:cfb791c6d05cec0b9cc244408089c309a41f321fe06c2083eabaeaf9184c0ff6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733741400823508110,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c40579-4d72-4efe-b921-1e0f98b91544,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522,PodSandboxId:82b54a7467a7aa9cf761959fb4e7e7ea830c6fdcf33629ec45c8b55326334f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733741398
382511881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrvgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2531e29f-a4d5-41f9-8c38-3220b4caf96b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082e8ff7e6c7e3e1110852687152ddb22202b3134aa3105a8b11aa7d702bcd6a,PodSandboxId:1486ff19db45e7b9cb8b545f56579a8765f6cef7fe3b1d44f967a5c5823b4cdc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173374138888
4669062,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9922f13afb31842008ba0179dabd897e,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63,PodSandboxId:27e12e36b1bd81bfdb6cc876b8c1a9c925ed2eec6ac687056f4bd10eb872b7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733741386069265058,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2460a8b15a62b9cf3ad5343586bde402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f,PodSandboxId:7bbf390b8ef03829849defea1ba3817acb7a86ce1705fb53d765d7ddca57a066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733741386071035939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89a89b1c65df6e3ad9608c5607172f77,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee,PodSandboxId:9493b93aded71efd1c05e2df3b9537c0c63dc9b2e9abdbf1be430d3c29d5ad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733741386009117508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 082fcfac40bcf36b76f1e733a9f73bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604,PodSandboxId:02e8433fa67cc5aef5d43a5edf820340ad5d91eef3bdd9e7149eeec0e55a8c95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733741385994463722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d8d358ed72ac30c9365aedd3aee4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28d93817-c684-468f-aaa2-5c96b995c736 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3354d3bec2060       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e47f42b7e0900       busybox-7dff88458-z9wjm
	f4ba11ff07ea5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   a5c60a0e3c19b       coredns-7c65d6cfc9-8hlml
	afc0f0aea4c8a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   038ff3d97cfe5       coredns-7c65d6cfc9-rz6mw
	d9fa96349b5a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   02bd44e5a67d9       storage-provisioner
	b6bf7c7cf0d68       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3    6 minutes ago       Running             kindnet-cni               0                   cfb791c6d05ce       kindnet-bqp2z
	3cf6196a4789e       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   82b54a7467a7a       kube-proxy-wrvgb
	082e8ff7e6c7e       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   1486ff19db45e       kube-vip-ha-792382
	64b96c1c22970       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   7bbf390b8ef03       kube-apiserver-ha-792382
	778345b29099a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   27e12e36b1bd8       etcd-ha-792382
	d93c68b855d9f       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   9493b93aded71       kube-scheduler-ha-792382
	00db8f77881ef       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   02e8433fa67cc       kube-controller-manager-ha-792382
	
	
	==> coredns [afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733] <==
	[INFO] 10.244.2.2:57485 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000178522s
	[INFO] 10.244.2.2:51008 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003461693s
	[INFO] 10.244.2.2:51209 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132423s
	[INFO] 10.244.2.2:44233 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160403s
	[INFO] 10.244.2.2:36343 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113366s
	[INFO] 10.244.1.2:40108 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001755871s
	[INFO] 10.244.1.2:57627 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088641s
	[INFO] 10.244.0.4:49175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210271s
	[INFO] 10.244.0.4:42721 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001653061s
	[INFO] 10.244.0.4:53085 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087293s
	[INFO] 10.244.2.2:46633 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111394s
	[INFO] 10.244.2.2:34060 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087724s
	[INFO] 10.244.2.2:42086 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112165s
	[INFO] 10.244.1.2:55917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167759s
	[INFO] 10.244.1.2:38190 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113655s
	[INFO] 10.244.1.2:46262 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092112s
	[INFO] 10.244.1.2:55410 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080217s
	[INFO] 10.244.0.4:43802 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073668s
	[INFO] 10.244.0.4:48010 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099328s
	[INFO] 10.244.0.4:45687 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004859s
	[INFO] 10.244.2.2:35669 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019184s
	[INFO] 10.244.2.2:54242 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000232065s
	[INFO] 10.244.2.2:41931 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000140914s
	[INFO] 10.244.0.4:48531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105047s
	[INFO] 10.244.0.4:36756 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000068167s
	
	
	==> coredns [f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd] <==
	[INFO] 10.244.0.4:58900 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184784s
	[INFO] 10.244.0.4:59585 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.004212695s
	[INFO] 10.244.0.4:42331 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001567158s
	[INFO] 10.244.2.2:43555 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003700387s
	[INFO] 10.244.2.2:38437 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000268841s
	[INFO] 10.244.1.2:36722 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174774s
	[INFO] 10.244.1.2:46295 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000167521s
	[INFO] 10.244.1.2:36004 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192453s
	[INFO] 10.244.1.2:54275 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001271437s
	[INFO] 10.244.1.2:48954 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183213s
	[INFO] 10.244.1.2:57839 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00017811s
	[INFO] 10.244.0.4:54946 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001925365s
	[INFO] 10.244.0.4:59669 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000722s
	[INFO] 10.244.0.4:40897 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074421s
	[INFO] 10.244.0.4:46937 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174065s
	[INFO] 10.244.0.4:34613 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075946s
	[INFO] 10.244.2.2:44189 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216239s
	[INFO] 10.244.0.4:39246 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155453s
	[INFO] 10.244.2.2:48134 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000162494s
	[INFO] 10.244.1.2:44589 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125364s
	[INFO] 10.244.1.2:59702 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00019329s
	[INFO] 10.244.1.2:58920 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000146935s
	[INFO] 10.244.1.2:55802 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000116158s
	[INFO] 10.244.0.4:47226 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097556s
	[INFO] 10.244.0.4:42857 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073279s
	
	
	==> describe nodes <==
	Name:               ha-792382
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792382
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=ha-792382
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T10_49_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:49:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792382
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:56:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 10:52:55 +0000   Mon, 09 Dec 2024 10:49:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 10:52:55 +0000   Mon, 09 Dec 2024 10:49:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 10:52:55 +0000   Mon, 09 Dec 2024 10:49:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 10:52:55 +0000   Mon, 09 Dec 2024 10:50:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    ha-792382
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c956a5ad4d142099b593c1d9352f7b5
	  System UUID:                2c956a5a-d4d1-4209-9b59-3c1d9352f7b5
	  Boot ID:                    5140ef96-1a92-4f56-b80b-7e99ce150ca0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-z9wjm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 coredns-7c65d6cfc9-8hlml             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m22s
	  kube-system                 coredns-7c65d6cfc9-rz6mw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m22s
	  kube-system                 etcd-ha-792382                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m26s
	  kube-system                 kindnet-bqp2z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m22s
	  kube-system                 kube-apiserver-ha-792382             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-controller-manager-ha-792382    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-proxy-wrvgb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-scheduler-ha-792382             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-vip-ha-792382                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m20s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m33s (x7 over 6m33s)  kubelet          Node ha-792382 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m33s (x8 over 6m33s)  kubelet          Node ha-792382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m33s (x8 over 6m33s)  kubelet          Node ha-792382 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m27s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m26s                  kubelet          Node ha-792382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s                  kubelet          Node ha-792382 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s                  kubelet          Node ha-792382 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m23s                  node-controller  Node ha-792382 event: Registered Node ha-792382 in Controller
	  Normal  NodeReady                6m6s                   kubelet          Node ha-792382 status is now: NodeReady
	  Normal  RegisteredNode           5m26s                  node-controller  Node ha-792382 event: Registered Node ha-792382 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-792382 event: Registered Node ha-792382 in Controller
	
	
	Name:               ha-792382-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792382-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=ha-792382
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T10_50_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:50:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792382-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:53:37 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 09 Dec 2024 10:52:48 +0000   Mon, 09 Dec 2024 10:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 09 Dec 2024 10:52:48 +0000   Mon, 09 Dec 2024 10:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 09 Dec 2024 10:52:48 +0000   Mon, 09 Dec 2024 10:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 09 Dec 2024 10:52:48 +0000   Mon, 09 Dec 2024 10:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-792382-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 167721adca2249268bf51688530c2893
	  System UUID:                167721ad-ca22-4926-8bf5-1688530c2893
	  Boot ID:                    74f1c671-e420-4f88-b05b-e50c0597ee01
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rbrpt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 etcd-ha-792382-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m32s
	  kube-system                 kindnet-hkrhk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m34s
	  kube-system                 kube-apiserver-ha-792382-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-controller-manager-ha-792382-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-proxy-dckpl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-scheduler-ha-792382-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-vip-ha-792382-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m30s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m34s (x8 over 5m34s)  kubelet          Node ha-792382-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m34s (x8 over 5m34s)  kubelet          Node ha-792382-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m34s (x7 over 5m34s)  kubelet          Node ha-792382-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m33s                  node-controller  Node ha-792382-m02 event: Registered Node ha-792382-m02 in Controller
	  Normal  RegisteredNode           5m26s                  node-controller  Node ha-792382-m02 event: Registered Node ha-792382-m02 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-792382-m02 event: Registered Node ha-792382-m02 in Controller
	  Normal  NodeNotReady             118s                   node-controller  Node ha-792382-m02 status is now: NodeNotReady
	
	
	Name:               ha-792382-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792382-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=ha-792382
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T10_52_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:51:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792382-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:56:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 10:53:00 +0000   Mon, 09 Dec 2024 10:51:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 10:53:00 +0000   Mon, 09 Dec 2024 10:51:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 10:53:00 +0000   Mon, 09 Dec 2024 10:51:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 10:53:00 +0000   Mon, 09 Dec 2024 10:52:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.82
	  Hostname:    ha-792382-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7e770a97238401cb03ba22edd7f66bc
	  System UUID:                c7e770a9-7238-401c-b03b-a22edd7f66bc
	  Boot ID:                    75bcd068-8763-4e3a-b01e-036ac11d2956
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ft8s2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 etcd-ha-792382-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kindnet-6hlht                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m20s
	  kube-system                 kube-apiserver-ha-792382-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-ha-792382-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-2l42s                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-scheduler-ha-792382-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-vip-ha-792382-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m16s                  kube-proxy       
	  Normal  Starting                 4m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m20s (x8 over 4m20s)  kubelet          Node ha-792382-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s (x8 over 4m20s)  kubelet          Node ha-792382-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s (x7 over 4m20s)  kubelet          Node ha-792382-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-792382-m03 event: Registered Node ha-792382-m03 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-792382-m03 event: Registered Node ha-792382-m03 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-792382-m03 event: Registered Node ha-792382-m03 in Controller
	
	
	Name:               ha-792382-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792382-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=ha-792382
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T10_53_05_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:53:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792382-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:56:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 10:53:35 +0000   Mon, 09 Dec 2024 10:53:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 10:53:35 +0000   Mon, 09 Dec 2024 10:53:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 10:53:35 +0000   Mon, 09 Dec 2024 10:53:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 10:53:35 +0000   Mon, 09 Dec 2024 10:53:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    ha-792382-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7109c0766654d148c611df97b2ed795
	  System UUID:                f7109c07-6665-4d14-8c61-1df97b2ed795
	  Boot ID:                    8d79820d-d818-486f-88fb-9a376256bc79
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rwsmp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m14s
	  kube-system                 kube-proxy-727n6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m14s (x2 over 3m14s)  kubelet          Node ha-792382-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m14s (x2 over 3m14s)  kubelet          Node ha-792382-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m14s (x2 over 3m14s)  kubelet          Node ha-792382-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-792382-m04 event: Registered Node ha-792382-m04 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-792382-m04 event: Registered Node ha-792382-m04 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-792382-m04 event: Registered Node ha-792382-m04 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node ha-792382-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 9 10:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052723] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037555] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.827157] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.929161] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.560988] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.837514] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.057481] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052320] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.193651] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.117185] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.263430] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.805323] systemd-fstab-generator[749]: Ignoring "noauto" option for root device
	[  +3.647118] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.055434] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.026961] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.076746] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.128281] kauditd_printk_skb: 21 callbacks suppressed
	[Dec 9 10:50] kauditd_printk_skb: 38 callbacks suppressed
	[ +38.131475] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63] <==
	{"level":"warn","ts":"2024-12-09T10:56:18.792826Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.811257Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.818026Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.821185Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.825379Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.830151Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.832002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.834381Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.834441Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.837858Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.901486Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.908500Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.914982Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.918145Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.921069Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.927842Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.931754Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.934274Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.940883Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.943988Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.946840Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.950166Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.955640Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:18.961400Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:19.009639Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:56:19 up 7 min,  0 users,  load average: 0.41, 0.31, 0.16
	Linux ha-792382 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3] <==
	I1209 10:55:41.792124       1 main.go:324] Node ha-792382-m04 has CIDR [10.244.3.0/24] 
	I1209 10:55:51.785788       1 main.go:297] Handling node with IPs: map[192.168.39.69:{}]
	I1209 10:55:51.785901       1 main.go:301] handling current node
	I1209 10:55:51.785962       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1209 10:55:51.785993       1 main.go:324] Node ha-792382-m02 has CIDR [10.244.1.0/24] 
	I1209 10:55:51.786189       1 main.go:297] Handling node with IPs: map[192.168.39.82:{}]
	I1209 10:55:51.786293       1 main.go:324] Node ha-792382-m03 has CIDR [10.244.2.0/24] 
	I1209 10:55:51.786573       1 main.go:297] Handling node with IPs: map[192.168.39.54:{}]
	I1209 10:55:51.786644       1 main.go:324] Node ha-792382-m04 has CIDR [10.244.3.0/24] 
	I1209 10:56:01.783030       1 main.go:297] Handling node with IPs: map[192.168.39.69:{}]
	I1209 10:56:01.783176       1 main.go:301] handling current node
	I1209 10:56:01.783209       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1209 10:56:01.783262       1 main.go:324] Node ha-792382-m02 has CIDR [10.244.1.0/24] 
	I1209 10:56:01.783503       1 main.go:297] Handling node with IPs: map[192.168.39.82:{}]
	I1209 10:56:01.783567       1 main.go:324] Node ha-792382-m03 has CIDR [10.244.2.0/24] 
	I1209 10:56:01.784071       1 main.go:297] Handling node with IPs: map[192.168.39.54:{}]
	I1209 10:56:01.784166       1 main.go:324] Node ha-792382-m04 has CIDR [10.244.3.0/24] 
	I1209 10:56:11.792014       1 main.go:297] Handling node with IPs: map[192.168.39.69:{}]
	I1209 10:56:11.792252       1 main.go:301] handling current node
	I1209 10:56:11.792297       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1209 10:56:11.792379       1 main.go:324] Node ha-792382-m02 has CIDR [10.244.1.0/24] 
	I1209 10:56:11.792752       1 main.go:297] Handling node with IPs: map[192.168.39.82:{}]
	I1209 10:56:11.792788       1 main.go:324] Node ha-792382-m03 has CIDR [10.244.2.0/24] 
	I1209 10:56:11.792953       1 main.go:297] Handling node with IPs: map[192.168.39.54:{}]
	I1209 10:56:11.792978       1 main.go:324] Node ha-792382-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f] <==
	I1209 10:49:52.072307       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1209 10:49:52.095069       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1209 10:49:56.392767       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1209 10:49:56.516080       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1209 10:51:59.302973       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1209 10:51:59.303668       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 331.746µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1209 10:51:59.304570       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1209 10:51:59.308414       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1209 10:51:59.309695       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="6.795998ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1209 10:52:32.421048       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43832: use of closed network connection
	E1209 10:52:32.619590       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43852: use of closed network connection
	E1209 10:52:32.815616       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43862: use of closed network connection
	E1209 10:52:33.010440       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43888: use of closed network connection
	E1209 10:52:33.191451       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43910: use of closed network connection
	E1209 10:52:33.385647       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43930: use of closed network connection
	E1209 10:52:33.571472       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43946: use of closed network connection
	E1209 10:52:33.741655       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43972: use of closed network connection
	E1209 10:52:33.919176       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43990: use of closed network connection
	E1209 10:52:34.226233       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44000: use of closed network connection
	E1209 10:52:34.408728       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44016: use of closed network connection
	E1209 10:52:34.588897       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44034: use of closed network connection
	E1209 10:52:34.765608       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44050: use of closed network connection
	E1209 10:52:34.943122       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44058: use of closed network connection
	E1209 10:52:35.115793       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44068: use of closed network connection
	W1209 10:54:00.405476       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.69 192.168.39.82]
	
	
	==> kube-controller-manager [00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604] <==
	I1209 10:53:04.483677       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-792382-m04" podCIDRs=["10.244.3.0/24"]
	I1209 10:53:04.483873       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:04.484031       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:04.508782       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:04.947247       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:05.336150       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:05.632610       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-792382-m04"
	I1209 10:53:05.665145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:07.101579       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:07.148958       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:08.041907       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:08.474258       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:14.706287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:25.397617       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:25.397765       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-792382-m04"
	I1209 10:53:25.412410       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:25.649201       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:35.378859       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:54:20.671888       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m02"
	I1209 10:54:20.672434       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-792382-m04"
	I1209 10:54:20.703980       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m02"
	I1209 10:54:20.840624       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.419282ms"
	I1209 10:54:20.841721       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="157.508µs"
	I1209 10:54:22.157822       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m02"
	I1209 10:54:25.899451       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m02"
	
	
	==> kube-proxy [3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 10:49:58.601423       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 10:49:58.617859       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	E1209 10:49:58.617945       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 10:49:58.657152       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 10:49:58.657213       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 10:49:58.657247       1 server_linux.go:169] "Using iptables Proxier"
	I1209 10:49:58.660760       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 10:49:58.661154       1 server.go:483] "Version info" version="v1.31.2"
	I1209 10:49:58.661230       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 10:49:58.663604       1 config.go:199] "Starting service config controller"
	I1209 10:49:58.663767       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 10:49:58.664471       1 config.go:105] "Starting endpoint slice config controller"
	I1209 10:49:58.664498       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 10:49:58.666409       1 config.go:328] "Starting node config controller"
	I1209 10:49:58.666433       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 10:49:58.765096       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 10:49:58.767373       1 shared_informer.go:320] Caches are synced for service config
	I1209 10:49:58.767373       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee] <==
	W1209 10:49:49.686971       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 10:49:49.687036       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1209 10:49:49.693717       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1209 10:49:49.693755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:49.756854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 10:49:49.756907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:49.761365       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 10:49:49.761407       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:49.901909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 10:49:49.902484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:50.012571       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1209 10:49:50.012617       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:50.018069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1209 10:49:50.018128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:50.045681       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1209 10:49:50.045732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:50.048146       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 10:49:50.048203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1209 10:49:51.665195       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1209 10:52:27.353144       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ft8s2\": pod busybox-7dff88458-ft8s2 is already assigned to node \"ha-792382-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-ft8s2" node="ha-792382-m03"
	E1209 10:52:27.354035       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 51271b6c-9fb3-4893-8502-54b74c4cbaa5(default/busybox-7dff88458-ft8s2) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-ft8s2"
	E1209 10:52:27.354086       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ft8s2\": pod busybox-7dff88458-ft8s2 is already assigned to node \"ha-792382-m03\"" pod="default/busybox-7dff88458-ft8s2"
	I1209 10:52:27.354141       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-ft8s2" node="ha-792382-m03"
	E1209 10:52:27.402980       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-z9wjm\": pod busybox-7dff88458-z9wjm is already assigned to node \"ha-792382\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-z9wjm" node="ha-792382"
	E1209 10:52:27.403164       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-z9wjm\": pod busybox-7dff88458-z9wjm is already assigned to node \"ha-792382\"" pod="default/busybox-7dff88458-z9wjm"
	
	
	==> kubelet <==
	Dec 09 10:54:52 ha-792382 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 10:54:52 ha-792382 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 10:54:52 ha-792382 kubelet[1304]: E1209 10:54:52.082247    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741692081818749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:54:52 ha-792382 kubelet[1304]: E1209 10:54:52.082273    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741692081818749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:02 ha-792382 kubelet[1304]: E1209 10:55:02.088147    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741702086894201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:02 ha-792382 kubelet[1304]: E1209 10:55:02.088210    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741702086894201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:12 ha-792382 kubelet[1304]: E1209 10:55:12.089935    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741712089600382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:12 ha-792382 kubelet[1304]: E1209 10:55:12.090372    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741712089600382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:22 ha-792382 kubelet[1304]: E1209 10:55:22.094837    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741722094438540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:22 ha-792382 kubelet[1304]: E1209 10:55:22.094877    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741722094438540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:32 ha-792382 kubelet[1304]: E1209 10:55:32.096240    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741732095902907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:32 ha-792382 kubelet[1304]: E1209 10:55:32.096268    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741732095902907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:42 ha-792382 kubelet[1304]: E1209 10:55:42.098166    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741742097877429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:42 ha-792382 kubelet[1304]: E1209 10:55:42.098566    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741742097877429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:52 ha-792382 kubelet[1304]: E1209 10:55:52.004085    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 10:55:52 ha-792382 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 10:55:52 ha-792382 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 10:55:52 ha-792382 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 10:55:52 ha-792382 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 10:55:52 ha-792382 kubelet[1304]: E1209 10:55:52.100761    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741752100425512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:52 ha-792382 kubelet[1304]: E1209 10:55:52.100783    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741752100425512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:56:02 ha-792382 kubelet[1304]: E1209 10:56:02.102546    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741762102177289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:56:02 ha-792382 kubelet[1304]: E1209 10:56:02.102939    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741762102177289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:56:12 ha-792382 kubelet[1304]: E1209 10:56:12.104513    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741772104031126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:56:12 ha-792382 kubelet[1304]: E1209 10:56:12.104554    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741772104031126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-792382 -n ha-792382
helpers_test.go:261: (dbg) Run:  kubectl --context ha-792382 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.40s)
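The kubelet log above is dominated by repeated "Eviction manager: ... missing image stats" errors, which come from kubelet's eviction manager processing the ImageFsInfo response it gets from CRI-O, plus a one-off ip6tables canary failure. A hypothetical manual cross-check on the node (not part of the automated test; it assumes the ha-792382 cluster is still running and that crictl is available on the guest) would be:

    # Ask CRI-O directly for the image filesystem usage that kubelet reports as missing
    out/minikube-linux-amd64 -p ha-792382 ssh -- sudo crictl imagefsinfo
    # Inspect the IPv6 nat table behind the "iptables canary" error
    out/minikube-linux-amd64 -p ha-792382 ssh -- sudo ip6tables -t nat -L -n
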

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.972345947s)
ha_test.go:309: expected profile "ha-792382" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-792382\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-792382\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-792382\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.69\",\"Port\":8443,\"Kubernet
esVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.89\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.82\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.54\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":f
alse,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"Mo
untIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-792382 -n ha-792382
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-792382 logs -n 25: (1.32571046s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382:/home/docker/cp-test_ha-792382-m03_ha-792382.txt                       |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382 sudo cat                                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m03_ha-792382.txt                                 |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m02:/home/docker/cp-test_ha-792382-m03_ha-792382-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m02 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m03_ha-792382-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04:/home/docker/cp-test_ha-792382-m03_ha-792382-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m04 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m03_ha-792382-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp testdata/cp-test.txt                                                | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3558467820/001/cp-test_ha-792382-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382:/home/docker/cp-test_ha-792382-m04_ha-792382.txt                       |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382 sudo cat                                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382.txt                                 |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m02:/home/docker/cp-test_ha-792382-m04_ha-792382-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m02 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03:/home/docker/cp-test_ha-792382-m04_ha-792382-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m03 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-792382 node stop m02 -v=7                                                     | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-792382 node start m02 -v=7                                                    | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 10:49:12
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 10:49:12.155112  627293 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:49:12.155243  627293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:49:12.155252  627293 out.go:358] Setting ErrFile to fd 2...
	I1209 10:49:12.155256  627293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:49:12.155455  627293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 10:49:12.156111  627293 out.go:352] Setting JSON to false
	I1209 10:49:12.157109  627293 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":12696,"bootTime":1733728656,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 10:49:12.157245  627293 start.go:139] virtualization: kvm guest
	I1209 10:49:12.159303  627293 out.go:177] * [ha-792382] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 10:49:12.160611  627293 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 10:49:12.160611  627293 notify.go:220] Checking for updates...
	I1209 10:49:12.163029  627293 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 10:49:12.164218  627293 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:49:12.165346  627293 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:12.166392  627293 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 10:49:12.168066  627293 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 10:49:12.169526  627293 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 10:49:12.205667  627293 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 10:49:12.206853  627293 start.go:297] selected driver: kvm2
	I1209 10:49:12.206869  627293 start.go:901] validating driver "kvm2" against <nil>
	I1209 10:49:12.206881  627293 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 10:49:12.207633  627293 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:49:12.207718  627293 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 10:49:12.223409  627293 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 10:49:12.223621  627293 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 10:49:12.224275  627293 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 10:49:12.224320  627293 cni.go:84] Creating CNI manager for ""
	I1209 10:49:12.224382  627293 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1209 10:49:12.224394  627293 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 10:49:12.224467  627293 start.go:340] cluster config:
	{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1209 10:49:12.224624  627293 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:49:12.226221  627293 out.go:177] * Starting "ha-792382" primary control-plane node in "ha-792382" cluster
	I1209 10:49:12.227308  627293 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:49:12.227336  627293 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 10:49:12.227354  627293 cache.go:56] Caching tarball of preloaded images
	I1209 10:49:12.227432  627293 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 10:49:12.227447  627293 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 10:49:12.227749  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:49:12.227772  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json: {Name:mkc1440c2022322fca4f71077ddb8bd509450a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:12.227928  627293 start.go:360] acquireMachinesLock for ha-792382: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 10:49:12.227972  627293 start.go:364] duration metric: took 26.731µs to acquireMachinesLock for "ha-792382"
	I1209 10:49:12.227996  627293 start.go:93] Provisioning new machine with config: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:49:12.228057  627293 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 10:49:12.229507  627293 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 10:49:12.229650  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:12.229688  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:12.243739  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40935
	I1209 10:49:12.244181  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:12.244733  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:12.244754  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:12.245151  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:12.245359  627293 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:49:12.245524  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:12.245673  627293 start.go:159] libmachine.API.Create for "ha-792382" (driver="kvm2")
	I1209 10:49:12.245706  627293 client.go:168] LocalClient.Create starting
	I1209 10:49:12.245734  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 10:49:12.245764  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:49:12.245782  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:49:12.245831  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 10:49:12.245849  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:49:12.245860  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:49:12.245876  627293 main.go:141] libmachine: Running pre-create checks...
	I1209 10:49:12.245884  627293 main.go:141] libmachine: (ha-792382) Calling .PreCreateCheck
	I1209 10:49:12.246327  627293 main.go:141] libmachine: (ha-792382) Calling .GetConfigRaw
	I1209 10:49:12.246669  627293 main.go:141] libmachine: Creating machine...
	I1209 10:49:12.246682  627293 main.go:141] libmachine: (ha-792382) Calling .Create
	I1209 10:49:12.246831  627293 main.go:141] libmachine: (ha-792382) Creating KVM machine...
	I1209 10:49:12.248145  627293 main.go:141] libmachine: (ha-792382) DBG | found existing default KVM network
	I1209 10:49:12.248911  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.248755  627316 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123350}
	I1209 10:49:12.248939  627293 main.go:141] libmachine: (ha-792382) DBG | created network xml: 
	I1209 10:49:12.248951  627293 main.go:141] libmachine: (ha-792382) DBG | <network>
	I1209 10:49:12.248971  627293 main.go:141] libmachine: (ha-792382) DBG |   <name>mk-ha-792382</name>
	I1209 10:49:12.248981  627293 main.go:141] libmachine: (ha-792382) DBG |   <dns enable='no'/>
	I1209 10:49:12.248994  627293 main.go:141] libmachine: (ha-792382) DBG |   
	I1209 10:49:12.249009  627293 main.go:141] libmachine: (ha-792382) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1209 10:49:12.249019  627293 main.go:141] libmachine: (ha-792382) DBG |     <dhcp>
	I1209 10:49:12.249032  627293 main.go:141] libmachine: (ha-792382) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1209 10:49:12.249045  627293 main.go:141] libmachine: (ha-792382) DBG |     </dhcp>
	I1209 10:49:12.249058  627293 main.go:141] libmachine: (ha-792382) DBG |   </ip>
	I1209 10:49:12.249067  627293 main.go:141] libmachine: (ha-792382) DBG |   
	I1209 10:49:12.249134  627293 main.go:141] libmachine: (ha-792382) DBG | </network>
	I1209 10:49:12.249173  627293 main.go:141] libmachine: (ha-792382) DBG | 
	I1209 10:49:12.253952  627293 main.go:141] libmachine: (ha-792382) DBG | trying to create private KVM network mk-ha-792382 192.168.39.0/24...
	I1209 10:49:12.320765  627293 main.go:141] libmachine: (ha-792382) DBG | private KVM network mk-ha-792382 192.168.39.0/24 created
	I1209 10:49:12.320810  627293 main.go:141] libmachine: (ha-792382) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382 ...
	I1209 10:49:12.320824  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.320703  627316 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:12.320846  627293 main.go:141] libmachine: (ha-792382) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 10:49:12.320864  627293 main.go:141] libmachine: (ha-792382) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 10:49:12.624365  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.624217  627316 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa...
	I1209 10:49:12.718158  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.718015  627316 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/ha-792382.rawdisk...
	I1209 10:49:12.718234  627293 main.go:141] libmachine: (ha-792382) DBG | Writing magic tar header
	I1209 10:49:12.718307  627293 main.go:141] libmachine: (ha-792382) DBG | Writing SSH key tar header
	I1209 10:49:12.718345  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:12.718134  627316 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382 ...
	I1209 10:49:12.718360  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382 (perms=drwx------)
	I1209 10:49:12.718367  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382
	I1209 10:49:12.718384  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 10:49:12.718399  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:12.718409  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 10:49:12.718416  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 10:49:12.718424  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 10:49:12.718431  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 10:49:12.718436  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home/jenkins
	I1209 10:49:12.718443  627293 main.go:141] libmachine: (ha-792382) DBG | Checking permissions on dir: /home
	I1209 10:49:12.718449  627293 main.go:141] libmachine: (ha-792382) DBG | Skipping /home - not owner
	I1209 10:49:12.718461  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 10:49:12.718475  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 10:49:12.718495  627293 main.go:141] libmachine: (ha-792382) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 10:49:12.718506  627293 main.go:141] libmachine: (ha-792382) Creating domain...
	I1209 10:49:12.719443  627293 main.go:141] libmachine: (ha-792382) define libvirt domain using xml: 
	I1209 10:49:12.719473  627293 main.go:141] libmachine: (ha-792382) <domain type='kvm'>
	I1209 10:49:12.719482  627293 main.go:141] libmachine: (ha-792382)   <name>ha-792382</name>
	I1209 10:49:12.719490  627293 main.go:141] libmachine: (ha-792382)   <memory unit='MiB'>2200</memory>
	I1209 10:49:12.719512  627293 main.go:141] libmachine: (ha-792382)   <vcpu>2</vcpu>
	I1209 10:49:12.719521  627293 main.go:141] libmachine: (ha-792382)   <features>
	I1209 10:49:12.719529  627293 main.go:141] libmachine: (ha-792382)     <acpi/>
	I1209 10:49:12.719537  627293 main.go:141] libmachine: (ha-792382)     <apic/>
	I1209 10:49:12.719561  627293 main.go:141] libmachine: (ha-792382)     <pae/>
	I1209 10:49:12.719580  627293 main.go:141] libmachine: (ha-792382)     
	I1209 10:49:12.719586  627293 main.go:141] libmachine: (ha-792382)   </features>
	I1209 10:49:12.719602  627293 main.go:141] libmachine: (ha-792382)   <cpu mode='host-passthrough'>
	I1209 10:49:12.719613  627293 main.go:141] libmachine: (ha-792382)   
	I1209 10:49:12.719619  627293 main.go:141] libmachine: (ha-792382)   </cpu>
	I1209 10:49:12.719631  627293 main.go:141] libmachine: (ha-792382)   <os>
	I1209 10:49:12.719637  627293 main.go:141] libmachine: (ha-792382)     <type>hvm</type>
	I1209 10:49:12.719648  627293 main.go:141] libmachine: (ha-792382)     <boot dev='cdrom'/>
	I1209 10:49:12.719659  627293 main.go:141] libmachine: (ha-792382)     <boot dev='hd'/>
	I1209 10:49:12.719681  627293 main.go:141] libmachine: (ha-792382)     <bootmenu enable='no'/>
	I1209 10:49:12.719701  627293 main.go:141] libmachine: (ha-792382)   </os>
	I1209 10:49:12.719719  627293 main.go:141] libmachine: (ha-792382)   <devices>
	I1209 10:49:12.719738  627293 main.go:141] libmachine: (ha-792382)     <disk type='file' device='cdrom'>
	I1209 10:49:12.719756  627293 main.go:141] libmachine: (ha-792382)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/boot2docker.iso'/>
	I1209 10:49:12.719767  627293 main.go:141] libmachine: (ha-792382)       <target dev='hdc' bus='scsi'/>
	I1209 10:49:12.719777  627293 main.go:141] libmachine: (ha-792382)       <readonly/>
	I1209 10:49:12.719791  627293 main.go:141] libmachine: (ha-792382)     </disk>
	I1209 10:49:12.719805  627293 main.go:141] libmachine: (ha-792382)     <disk type='file' device='disk'>
	I1209 10:49:12.719816  627293 main.go:141] libmachine: (ha-792382)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 10:49:12.719831  627293 main.go:141] libmachine: (ha-792382)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/ha-792382.rawdisk'/>
	I1209 10:49:12.719845  627293 main.go:141] libmachine: (ha-792382)       <target dev='hda' bus='virtio'/>
	I1209 10:49:12.719857  627293 main.go:141] libmachine: (ha-792382)     </disk>
	I1209 10:49:12.719868  627293 main.go:141] libmachine: (ha-792382)     <interface type='network'>
	I1209 10:49:12.719881  627293 main.go:141] libmachine: (ha-792382)       <source network='mk-ha-792382'/>
	I1209 10:49:12.719892  627293 main.go:141] libmachine: (ha-792382)       <model type='virtio'/>
	I1209 10:49:12.719902  627293 main.go:141] libmachine: (ha-792382)     </interface>
	I1209 10:49:12.719910  627293 main.go:141] libmachine: (ha-792382)     <interface type='network'>
	I1209 10:49:12.719940  627293 main.go:141] libmachine: (ha-792382)       <source network='default'/>
	I1209 10:49:12.719966  627293 main.go:141] libmachine: (ha-792382)       <model type='virtio'/>
	I1209 10:49:12.719981  627293 main.go:141] libmachine: (ha-792382)     </interface>
	I1209 10:49:12.719994  627293 main.go:141] libmachine: (ha-792382)     <serial type='pty'>
	I1209 10:49:12.720009  627293 main.go:141] libmachine: (ha-792382)       <target port='0'/>
	I1209 10:49:12.720026  627293 main.go:141] libmachine: (ha-792382)     </serial>
	I1209 10:49:12.720038  627293 main.go:141] libmachine: (ha-792382)     <console type='pty'>
	I1209 10:49:12.720049  627293 main.go:141] libmachine: (ha-792382)       <target type='serial' port='0'/>
	I1209 10:49:12.720070  627293 main.go:141] libmachine: (ha-792382)     </console>
	I1209 10:49:12.720083  627293 main.go:141] libmachine: (ha-792382)     <rng model='virtio'>
	I1209 10:49:12.720106  627293 main.go:141] libmachine: (ha-792382)       <backend model='random'>/dev/random</backend>
	I1209 10:49:12.720122  627293 main.go:141] libmachine: (ha-792382)     </rng>
	I1209 10:49:12.720133  627293 main.go:141] libmachine: (ha-792382)     
	I1209 10:49:12.720141  627293 main.go:141] libmachine: (ha-792382)     
	I1209 10:49:12.720152  627293 main.go:141] libmachine: (ha-792382)   </devices>
	I1209 10:49:12.720161  627293 main.go:141] libmachine: (ha-792382) </domain>
	I1209 10:49:12.720175  627293 main.go:141] libmachine: (ha-792382) 
	I1209 10:49:12.724156  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:b1:77:e1 in network default
	I1209 10:49:12.724674  627293 main.go:141] libmachine: (ha-792382) Ensuring networks are active...
	I1209 10:49:12.724713  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:12.725331  627293 main.go:141] libmachine: (ha-792382) Ensuring network default is active
	I1209 10:49:12.725573  627293 main.go:141] libmachine: (ha-792382) Ensuring network mk-ha-792382 is active
	I1209 10:49:12.726011  627293 main.go:141] libmachine: (ha-792382) Getting domain xml...
	I1209 10:49:12.726856  627293 main.go:141] libmachine: (ha-792382) Creating domain...
	I1209 10:49:13.913426  627293 main.go:141] libmachine: (ha-792382) Waiting to get IP...
	I1209 10:49:13.914474  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:13.914854  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:13.914884  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:13.914843  627316 retry.go:31] will retry after 231.46558ms: waiting for machine to come up
	I1209 10:49:14.148392  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:14.148786  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:14.148818  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:14.148733  627316 retry.go:31] will retry after 323.334507ms: waiting for machine to come up
	I1209 10:49:14.473105  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:14.473482  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:14.473521  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:14.473432  627316 retry.go:31] will retry after 293.410473ms: waiting for machine to come up
	I1209 10:49:14.769073  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:14.769413  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:14.769442  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:14.769369  627316 retry.go:31] will retry after 414.561658ms: waiting for machine to come up
	I1209 10:49:15.186115  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:15.186526  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:15.186550  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:15.186486  627316 retry.go:31] will retry after 602.170929ms: waiting for machine to come up
	I1209 10:49:15.790232  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:15.790609  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:15.790636  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:15.790561  627316 retry.go:31] will retry after 626.828073ms: waiting for machine to come up
	I1209 10:49:16.419433  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:16.419896  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:16.419938  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:16.419857  627316 retry.go:31] will retry after 735.370165ms: waiting for machine to come up
	I1209 10:49:17.156849  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:17.157231  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:17.157266  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:17.157218  627316 retry.go:31] will retry after 1.229419392s: waiting for machine to come up
	I1209 10:49:18.387855  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:18.388261  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:18.388300  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:18.388201  627316 retry.go:31] will retry after 1.781823768s: waiting for machine to come up
	I1209 10:49:20.172140  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:20.172552  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:20.172583  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:20.172526  627316 retry.go:31] will retry after 1.563022016s: waiting for machine to come up
	I1209 10:49:21.736731  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:21.737192  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:21.737227  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:21.737132  627316 retry.go:31] will retry after 1.796183688s: waiting for machine to come up
	I1209 10:49:23.536165  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:23.536600  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:23.536633  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:23.536553  627316 retry.go:31] will retry after 2.766987907s: waiting for machine to come up
	I1209 10:49:26.306562  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:26.306896  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:26.306918  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:26.306878  627316 retry.go:31] will retry after 3.713874413s: waiting for machine to come up
	I1209 10:49:30.024188  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:30.024650  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find current IP address of domain ha-792382 in network mk-ha-792382
	I1209 10:49:30.024693  627293 main.go:141] libmachine: (ha-792382) DBG | I1209 10:49:30.024632  627316 retry.go:31] will retry after 4.575233995s: waiting for machine to come up
	I1209 10:49:34.603079  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.603556  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has current primary IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.603577  627293 main.go:141] libmachine: (ha-792382) Found IP for machine: 192.168.39.69
	I1209 10:49:34.603593  627293 main.go:141] libmachine: (ha-792382) Reserving static IP address...
	I1209 10:49:34.603995  627293 main.go:141] libmachine: (ha-792382) DBG | unable to find host DHCP lease matching {name: "ha-792382", mac: "52:54:00:a8:82:f7", ip: "192.168.39.69"} in network mk-ha-792382
	I1209 10:49:34.677115  627293 main.go:141] libmachine: (ha-792382) DBG | Getting to WaitForSSH function...
	I1209 10:49:34.677150  627293 main.go:141] libmachine: (ha-792382) Reserved static IP address: 192.168.39.69
	I1209 10:49:34.677164  627293 main.go:141] libmachine: (ha-792382) Waiting for SSH to be available...
	I1209 10:49:34.680016  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.680510  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:34.680547  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.680683  627293 main.go:141] libmachine: (ha-792382) DBG | Using SSH client type: external
	I1209 10:49:34.680713  627293 main.go:141] libmachine: (ha-792382) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa (-rw-------)
	I1209 10:49:34.680743  627293 main.go:141] libmachine: (ha-792382) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 10:49:34.680759  627293 main.go:141] libmachine: (ha-792382) DBG | About to run SSH command:
	I1209 10:49:34.680771  627293 main.go:141] libmachine: (ha-792382) DBG | exit 0
	I1209 10:49:34.802056  627293 main.go:141] libmachine: (ha-792382) DBG | SSH cmd err, output: <nil>: 
	I1209 10:49:34.802342  627293 main.go:141] libmachine: (ha-792382) KVM machine creation complete!
	I1209 10:49:34.802652  627293 main.go:141] libmachine: (ha-792382) Calling .GetConfigRaw
	I1209 10:49:34.803265  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:34.803470  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:34.803641  627293 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 10:49:34.803655  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:49:34.804897  627293 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 10:49:34.804910  627293 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 10:49:34.804920  627293 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 10:49:34.804925  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:34.807181  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.807580  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:34.807606  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.807797  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:34.807971  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:34.808252  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:34.808380  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:34.808550  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:34.808901  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:34.808916  627293 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 10:49:34.901048  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
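	[editor's note] The "exit 0" command above is the reachability probe libmachine uses to decide SSH is available. A minimal Go sketch of the same idea, using golang.org/x/crypto/ssh, is below; the host, user, and key path are taken from the log, but the code itself is an illustrative assumption, not minikube's implementation.

	// sshprobe.go — hypothetical sketch of an "exit 0" SSH reachability probe.
	package main

	import (
		"fmt"
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Placeholder key path modeled on the one in the log.
		key, err := os.ReadFile("/home/jenkins/.minikube/machines/ha-792382/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", "192.168.39.69:22", cfg)
		if err != nil {
			log.Fatalf("SSH not ready: %v", err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		// The probe simply asks the remote shell to exit successfully.
		if err := sess.Run("exit 0"); err != nil {
			log.Fatalf("probe failed: %v", err)
		}
		fmt.Println("SSH is available")
	}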
	I1209 10:49:34.901075  627293 main.go:141] libmachine: Detecting the provisioner...
	I1209 10:49:34.901084  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:34.903801  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.904137  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:34.904167  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:34.904294  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:34.904473  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:34.904619  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:34.904801  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:34.904935  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:34.905144  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:34.905156  627293 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 10:49:34.998134  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 10:49:34.998232  627293 main.go:141] libmachine: found compatible host: buildroot
	I1209 10:49:34.998245  627293 main.go:141] libmachine: Provisioning with buildroot...
	I1209 10:49:34.998256  627293 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:49:34.998517  627293 buildroot.go:166] provisioning hostname "ha-792382"
	I1209 10:49:34.998550  627293 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:49:34.998742  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.001204  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.001556  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.001585  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.001746  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.001925  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.002086  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.002233  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.002387  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:35.002580  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:35.002594  627293 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792382 && echo "ha-792382" | sudo tee /etc/hostname
	I1209 10:49:35.111878  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792382
	
	I1209 10:49:35.111914  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.114679  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.114968  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.114999  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.115174  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.115415  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.115601  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.115731  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.115880  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:35.116106  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:35.116130  627293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792382' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792382/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792382' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 10:49:35.218632  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:49:35.218667  627293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 10:49:35.218688  627293 buildroot.go:174] setting up certificates
	I1209 10:49:35.218699  627293 provision.go:84] configureAuth start
	I1209 10:49:35.218708  627293 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:49:35.218985  627293 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:49:35.221513  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.221813  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.221835  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.221978  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.224283  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.224638  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.224666  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.224816  627293 provision.go:143] copyHostCerts
	I1209 10:49:35.224849  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:49:35.224892  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 10:49:35.224913  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:49:35.225004  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 10:49:35.225113  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:49:35.225145  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 10:49:35.225155  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:49:35.225195  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 10:49:35.225255  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:49:35.225280  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 10:49:35.225290  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:49:35.225325  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 10:49:35.225392  627293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.ha-792382 san=[127.0.0.1 192.168.39.69 ha-792382 localhost minikube]
	I1209 10:49:35.530739  627293 provision.go:177] copyRemoteCerts
	I1209 10:49:35.530807  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 10:49:35.530832  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.533806  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.534127  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.534158  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.534311  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.534552  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.534707  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.534862  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:35.611999  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 10:49:35.612097  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 10:49:35.633738  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 10:49:35.633820  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1209 10:49:35.654744  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 10:49:35.654813  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 10:49:35.675689  627293 provision.go:87] duration metric: took 456.977679ms to configureAuth
	I1209 10:49:35.675718  627293 buildroot.go:189] setting minikube options for container-runtime
	I1209 10:49:35.675925  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:49:35.676032  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.678943  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.679261  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.679289  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.679496  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.679710  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.679841  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.679959  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.680105  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:35.680332  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:35.680355  627293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 10:49:35.879810  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 10:49:35.879848  627293 main.go:141] libmachine: Checking connection to Docker...
	I1209 10:49:35.879878  627293 main.go:141] libmachine: (ha-792382) Calling .GetURL
	I1209 10:49:35.881298  627293 main.go:141] libmachine: (ha-792382) DBG | Using libvirt version 6000000
	I1209 10:49:35.883322  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.883653  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.883694  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.883840  627293 main.go:141] libmachine: Docker is up and running!
	I1209 10:49:35.883855  627293 main.go:141] libmachine: Reticulating splines...
	I1209 10:49:35.883863  627293 client.go:171] duration metric: took 23.63814664s to LocalClient.Create
	I1209 10:49:35.883888  627293 start.go:167] duration metric: took 23.638217304s to libmachine.API.Create "ha-792382"
	I1209 10:49:35.883903  627293 start.go:293] postStartSetup for "ha-792382" (driver="kvm2")
	I1209 10:49:35.883916  627293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 10:49:35.883934  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:35.884193  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 10:49:35.884224  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:35.886333  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.886719  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:35.886746  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:35.886830  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:35.887023  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:35.887177  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:35.887342  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:35.963840  627293 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 10:49:35.967678  627293 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 10:49:35.967709  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 10:49:35.967791  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 10:49:35.967866  627293 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 10:49:35.967876  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /etc/ssl/certs/6170172.pem
	I1209 10:49:35.967969  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 10:49:35.976432  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:49:35.997593  627293 start.go:296] duration metric: took 113.67336ms for postStartSetup
	I1209 10:49:35.997658  627293 main.go:141] libmachine: (ha-792382) Calling .GetConfigRaw
	I1209 10:49:35.998325  627293 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:49:36.000848  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.001239  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.001267  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.001479  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:49:36.001656  627293 start.go:128] duration metric: took 23.77358998s to createHost
	I1209 10:49:36.001690  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:36.004043  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.004400  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.004431  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.004549  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:36.004734  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:36.004893  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:36.005024  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:36.005202  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:49:36.005368  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:49:36.005389  627293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 10:49:36.102487  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733741376.078541083
	
	I1209 10:49:36.102513  627293 fix.go:216] guest clock: 1733741376.078541083
	I1209 10:49:36.102520  627293 fix.go:229] Guest: 2024-12-09 10:49:36.078541083 +0000 UTC Remote: 2024-12-09 10:49:36.001674575 +0000 UTC m=+23.885913523 (delta=76.866508ms)
	I1209 10:49:36.102562  627293 fix.go:200] guest clock delta is within tolerance: 76.866508ms
	I1209 10:49:36.102567  627293 start.go:83] releasing machines lock for "ha-792382", held for 23.874584082s
	I1209 10:49:36.102599  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:36.102894  627293 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:49:36.105447  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.105786  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.105824  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.105948  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:36.106428  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:36.106564  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:36.106659  627293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 10:49:36.106712  627293 ssh_runner.go:195] Run: cat /version.json
	I1209 10:49:36.106729  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:36.106735  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:36.108936  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.108975  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.109292  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.109315  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:36.109331  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.109347  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:36.109458  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:36.109631  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:36.109648  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:36.109795  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:36.109838  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:36.109969  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:36.109997  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:36.110076  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:36.213912  627293 ssh_runner.go:195] Run: systemctl --version
	I1209 10:49:36.219737  627293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 10:49:36.373775  627293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 10:49:36.379232  627293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 10:49:36.379295  627293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 10:49:36.394395  627293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 10:49:36.394420  627293 start.go:495] detecting cgroup driver to use...
	I1209 10:49:36.394492  627293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 10:49:36.409701  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 10:49:36.422542  627293 docker.go:217] disabling cri-docker service (if available) ...
	I1209 10:49:36.422600  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 10:49:36.434811  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 10:49:36.447372  627293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 10:49:36.555614  627293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 10:49:36.712890  627293 docker.go:233] disabling docker service ...
	I1209 10:49:36.712971  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 10:49:36.726789  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 10:49:36.738514  627293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 10:49:36.860478  627293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 10:49:36.981442  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 10:49:36.994232  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 10:49:37.010639  627293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 10:49:37.010699  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.019623  627293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 10:49:37.019678  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.028741  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.037802  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.047112  627293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 10:49:37.056587  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.065626  627293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.081471  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:49:37.090400  627293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 10:49:37.098511  627293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 10:49:37.098567  627293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 10:49:37.112020  627293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 10:49:37.122574  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:49:37.244301  627293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 10:49:37.327990  627293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 10:49:37.328076  627293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 10:49:37.332519  627293 start.go:563] Will wait 60s for crictl version
	I1209 10:49:37.332580  627293 ssh_runner.go:195] Run: which crictl
	I1209 10:49:37.336027  627293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 10:49:37.371600  627293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 10:49:37.371689  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:49:37.397060  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:49:37.427301  627293 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 10:49:37.428631  627293 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:49:37.431338  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:37.431646  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:37.431664  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:37.431871  627293 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 10:49:37.435530  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:49:37.447078  627293 kubeadm.go:883] updating cluster {Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 10:49:37.447263  627293 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:49:37.447334  627293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 10:49:37.477408  627293 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 10:49:37.477478  627293 ssh_runner.go:195] Run: which lz4
	I1209 10:49:37.480957  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1209 10:49:37.481050  627293 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 10:49:37.484762  627293 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 10:49:37.484788  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 10:49:38.710605  627293 crio.go:462] duration metric: took 1.229579062s to copy over tarball
	I1209 10:49:38.710680  627293 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 10:49:40.690695  627293 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.979974769s)
	I1209 10:49:40.690734  627293 crio.go:469] duration metric: took 1.980097705s to extract the tarball
	I1209 10:49:40.690745  627293 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 10:49:40.726929  627293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 10:49:40.771095  627293 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 10:49:40.771125  627293 cache_images.go:84] Images are preloaded, skipping loading
	I1209 10:49:40.771136  627293 kubeadm.go:934] updating node { 192.168.39.69 8443 v1.31.2 crio true true} ...
	I1209 10:49:40.771264  627293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792382 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 10:49:40.771357  627293 ssh_runner.go:195] Run: crio config
	I1209 10:49:40.816747  627293 cni.go:84] Creating CNI manager for ""
	I1209 10:49:40.816772  627293 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 10:49:40.816783  627293 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 10:49:40.816808  627293 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-792382 NodeName:ha-792382 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 10:49:40.816935  627293 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-792382"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.69"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.69"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 10:49:40.816960  627293 kube-vip.go:115] generating kube-vip config ...
	I1209 10:49:40.817003  627293 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 10:49:40.831794  627293 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 10:49:40.831917  627293 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1209 10:49:40.831988  627293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 10:49:40.841266  627293 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 10:49:40.841344  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1209 10:49:40.850351  627293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1209 10:49:40.865301  627293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 10:49:40.880173  627293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1209 10:49:40.895089  627293 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1209 10:49:40.909836  627293 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 10:49:40.913336  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:49:40.924356  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:49:41.046665  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:49:41.063018  627293 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382 for IP: 192.168.39.69
	I1209 10:49:41.063041  627293 certs.go:194] generating shared ca certs ...
	I1209 10:49:41.063062  627293 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.063244  627293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 10:49:41.063289  627293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 10:49:41.063300  627293 certs.go:256] generating profile certs ...
	I1209 10:49:41.063355  627293 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key
	I1209 10:49:41.063367  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt with IP's: []
	I1209 10:49:41.129843  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt ...
	I1209 10:49:41.129870  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt: {Name:mkf984c9e526db9b810af9b168d6930601d7ed72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.130077  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key ...
	I1209 10:49:41.130094  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key: {Name:mk7ce7334711bfa08abe5164a05b3a0e352b8f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.130213  627293 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.27c0a765
	I1209 10:49:41.130234  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.27c0a765 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.254]
	I1209 10:49:41.505985  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.27c0a765 ...
	I1209 10:49:41.506019  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.27c0a765: {Name:mkd0b0619960f58505ea5c5b1f53c5a2d8b55baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.506242  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.27c0a765 ...
	I1209 10:49:41.506261  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.27c0a765: {Name:mk67bc39f2b151954187d9bdff2b01a7060c0444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.506368  627293 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.27c0a765 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt
	I1209 10:49:41.506445  627293 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.27c0a765 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key
	I1209 10:49:41.506499  627293 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key
	I1209 10:49:41.506513  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt with IP's: []
	I1209 10:49:41.582775  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt ...
	I1209 10:49:41.582805  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt: {Name:mk8ba382df4a8d41cbb5595274fb67800a146923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.582997  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key ...
	I1209 10:49:41.583012  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key: {Name:mka4002ccf01f2f736e4a0e998ece96628af1083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:41.583117  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 10:49:41.583147  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 10:49:41.583161  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 10:49:41.583173  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 10:49:41.583197  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 10:49:41.583210  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 10:49:41.583222  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 10:49:41.583234  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 10:49:41.583286  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 10:49:41.583322  627293 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 10:49:41.583332  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 10:49:41.583354  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 10:49:41.583377  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 10:49:41.583404  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 10:49:41.583441  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:49:41.583468  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /usr/share/ca-certificates/6170172.pem
	I1209 10:49:41.583481  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:49:41.583493  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem -> /usr/share/ca-certificates/617017.pem
	I1209 10:49:41.584023  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 10:49:41.607858  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 10:49:41.629298  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 10:49:41.650915  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 10:49:41.672892  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 10:49:41.695834  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 10:49:41.719653  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 10:49:41.742298  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 10:49:41.764468  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 10:49:41.786947  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 10:49:41.811703  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 10:49:41.837346  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 10:49:41.855854  627293 ssh_runner.go:195] Run: openssl version
	I1209 10:49:41.862371  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 10:49:41.872771  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 10:49:41.878140  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 10:49:41.878210  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 10:49:41.883640  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 10:49:41.893209  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 10:49:41.902869  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 10:49:41.906850  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 10:49:41.906898  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 10:49:41.912084  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 10:49:41.922405  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 10:49:41.932252  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:49:41.936213  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:49:41.936274  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:49:41.941486  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
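The symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes of the corresponding CA certificates, which is why openssl x509 -hash -noout is run on each file first. A minimal sketch of the same trust-store step, assuming a hypothetical certificate at /usr/share/ca-certificates/example.pem:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	sudo ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/example.pem
	sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${HASH}.0"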
	I1209 10:49:41.951188  627293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 10:49:41.954834  627293 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 10:49:41.954890  627293 kubeadm.go:392] StartCluster: {Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:49:41.954978  627293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 10:49:41.955029  627293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 10:49:41.990596  627293 cri.go:89] found id: ""
	I1209 10:49:41.990674  627293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 10:49:41.999783  627293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 10:49:42.008238  627293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 10:49:42.016846  627293 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 10:49:42.016865  627293 kubeadm.go:157] found existing configuration files:
	
	I1209 10:49:42.016904  627293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 10:49:42.024739  627293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 10:49:42.024809  627293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 10:49:42.033044  627293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 10:49:42.040972  627293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 10:49:42.041020  627293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 10:49:42.049238  627293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 10:49:42.056966  627293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 10:49:42.057032  627293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 10:49:42.065232  627293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 10:49:42.073082  627293 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 10:49:42.073123  627293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 10:49:42.081145  627293 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 10:49:42.179849  627293 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 10:49:42.179910  627293 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 10:49:42.276408  627293 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 10:49:42.276561  627293 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 10:49:42.276716  627293 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 10:49:42.284852  627293 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 10:49:42.286435  627293 out.go:235]   - Generating certificates and keys ...
	I1209 10:49:42.286522  627293 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 10:49:42.286594  627293 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 10:49:42.590387  627293 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 10:49:42.745055  627293 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 10:49:42.887467  627293 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 10:49:43.151549  627293 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 10:49:43.207644  627293 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 10:49:43.207798  627293 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-792382 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	I1209 10:49:43.393565  627293 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 10:49:43.393710  627293 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-792382 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	I1209 10:49:43.595429  627293 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 10:49:43.672644  627293 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 10:49:43.819815  627293 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 10:49:43.819914  627293 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 10:49:44.041243  627293 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 10:49:44.173892  627293 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 10:49:44.337644  627293 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 10:49:44.481944  627293 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 10:49:44.539526  627293 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 10:49:44.540094  627293 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 10:49:44.543689  627293 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 10:49:44.575870  627293 out.go:235]   - Booting up control plane ...
	I1209 10:49:44.576053  627293 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 10:49:44.576187  627293 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 10:49:44.576309  627293 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 10:49:44.576459  627293 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 10:49:44.576560  627293 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 10:49:44.576606  627293 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 10:49:44.708364  627293 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 10:49:44.708561  627293 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 10:49:45.209677  627293 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.518639ms
	I1209 10:49:45.209811  627293 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 10:49:51.244834  627293 kubeadm.go:310] [api-check] The API server is healthy after 6.038769474s
	I1209 10:49:51.258766  627293 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 10:49:51.275586  627293 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 10:49:51.347505  627293 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 10:49:51.347730  627293 kubeadm.go:310] [mark-control-plane] Marking the node ha-792382 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 10:49:51.363557  627293 kubeadm.go:310] [bootstrap-token] Using token: 3fogiz.oanziwjzsm1wr1kv
	I1209 10:49:51.364826  627293 out.go:235]   - Configuring RBAC rules ...
	I1209 10:49:51.364951  627293 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 10:49:51.370786  627293 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 10:49:51.381797  627293 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 10:49:51.388857  627293 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 10:49:51.392743  627293 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 10:49:51.397933  627293 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 10:49:51.652382  627293 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 10:49:52.085079  627293 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 10:49:52.651844  627293 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 10:49:52.653438  627293 kubeadm.go:310] 
	I1209 10:49:52.653557  627293 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 10:49:52.653580  627293 kubeadm.go:310] 
	I1209 10:49:52.653672  627293 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 10:49:52.653682  627293 kubeadm.go:310] 
	I1209 10:49:52.653710  627293 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 10:49:52.653783  627293 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 10:49:52.653859  627293 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 10:49:52.653869  627293 kubeadm.go:310] 
	I1209 10:49:52.653946  627293 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 10:49:52.653955  627293 kubeadm.go:310] 
	I1209 10:49:52.654040  627293 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 10:49:52.654062  627293 kubeadm.go:310] 
	I1209 10:49:52.654116  627293 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 10:49:52.654229  627293 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 10:49:52.654328  627293 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 10:49:52.654347  627293 kubeadm.go:310] 
	I1209 10:49:52.654461  627293 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 10:49:52.654579  627293 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 10:49:52.654591  627293 kubeadm.go:310] 
	I1209 10:49:52.654710  627293 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3fogiz.oanziwjzsm1wr1kv \
	I1209 10:49:52.654860  627293 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 10:49:52.654894  627293 kubeadm.go:310] 	--control-plane 
	I1209 10:49:52.654903  627293 kubeadm.go:310] 
	I1209 10:49:52.655035  627293 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 10:49:52.655045  627293 kubeadm.go:310] 
	I1209 10:49:52.655125  627293 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3fogiz.oanziwjzsm1wr1kv \
	I1209 10:49:52.655253  627293 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 10:49:52.656128  627293 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 10:49:52.656180  627293 cni.go:84] Creating CNI manager for ""
	I1209 10:49:52.656208  627293 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 10:49:52.657779  627293 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1209 10:49:52.659033  627293 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 10:49:52.663808  627293 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1209 10:49:52.663829  627293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 10:49:52.683028  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 10:49:53.058715  627293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 10:49:53.058827  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:53.058833  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792382 minikube.k8s.io/updated_at=2024_12_09T10_49_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=ha-792382 minikube.k8s.io/primary=true
	I1209 10:49:53.086878  627293 ops.go:34] apiserver oom_adj: -16
	I1209 10:49:53.256202  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:53.756573  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:54.256994  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:54.756404  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:55.257137  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:55.756813  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:56.256686  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 10:49:56.352743  627293 kubeadm.go:1113] duration metric: took 3.294004538s to wait for elevateKubeSystemPrivileges
	I1209 10:49:56.352793  627293 kubeadm.go:394] duration metric: took 14.397907015s to StartCluster
	I1209 10:49:56.352820  627293 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:56.352918  627293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:49:56.354019  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:49:56.354304  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 10:49:56.354300  627293 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:49:56.354326  627293 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 10:49:56.354417  627293 start.go:241] waiting for startup goroutines ...
	I1209 10:49:56.354432  627293 addons.go:69] Setting storage-provisioner=true in profile "ha-792382"
	I1209 10:49:56.354455  627293 addons.go:234] Setting addon storage-provisioner=true in "ha-792382"
	I1209 10:49:56.354464  627293 addons.go:69] Setting default-storageclass=true in profile "ha-792382"
	I1209 10:49:56.354495  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:49:56.354504  627293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-792382"
	I1209 10:49:56.354547  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:49:56.354836  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.354867  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.354970  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.355019  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.371190  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I1209 10:49:56.371264  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40229
	I1209 10:49:56.371767  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.371795  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.372258  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.372273  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.372420  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.372446  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.372589  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.372844  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.373068  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:49:56.373184  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.373230  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.375150  627293 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:49:56.375437  627293 kapi.go:59] client config for ha-792382: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt", KeyFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key", CAFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 10:49:56.375916  627293 cert_rotation.go:140] Starting client certificate rotation controller
	I1209 10:49:56.376176  627293 addons.go:234] Setting addon default-storageclass=true in "ha-792382"
	I1209 10:49:56.376225  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:49:56.376515  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.376560  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.389420  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45615
	I1209 10:49:56.390064  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.390648  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.390676  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.391072  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.391316  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:49:56.391995  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I1209 10:49:56.392539  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.393048  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.393071  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.393381  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.393446  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:56.393880  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:56.393927  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:56.395537  627293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 10:49:56.396877  627293 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 10:49:56.396901  627293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 10:49:56.396927  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:56.399986  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:56.400413  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:56.400445  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:56.400639  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:56.400862  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:56.401027  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:56.401192  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:56.410237  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I1209 10:49:56.411256  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:56.413501  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:56.413527  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:56.414391  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:56.414656  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:49:56.416343  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:49:56.416575  627293 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 10:49:56.416592  627293 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 10:49:56.416608  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:49:56.419239  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:56.419746  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:49:56.419776  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:49:56.419875  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:49:56.420076  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:49:56.420261  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:49:56.420422  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:49:56.497434  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
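The sed pipeline above rewrites the coredns ConfigMap so the Corefile resolves host.minikube.internal to the host-only gateway before falling through to /etc/resolv.conf. A sketch of the resulting Corefile fragment (illustrative, not the verbatim ConfigMap contents):

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf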
	I1209 10:49:56.595755  627293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 10:49:56.677666  627293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 10:49:57.066334  627293 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1209 10:49:57.258939  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.258974  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.258947  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.259060  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.259277  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.259322  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.259343  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.259358  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.259450  627293 main.go:141] libmachine: (ha-792382) DBG | Closing plugin on server side
	I1209 10:49:57.259495  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.259510  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.259523  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.259535  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.259638  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.259658  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.259664  627293 main.go:141] libmachine: (ha-792382) DBG | Closing plugin on server side
	I1209 10:49:57.259795  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.259815  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.259895  627293 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 10:49:57.259914  627293 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 10:49:57.260014  627293 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1209 10:49:57.260024  627293 round_trippers.go:469] Request Headers:
	I1209 10:49:57.260035  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:49:57.260046  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:49:57.272826  627293 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1209 10:49:57.273379  627293 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1209 10:49:57.273393  627293 round_trippers.go:469] Request Headers:
	I1209 10:49:57.273400  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:49:57.273404  627293 round_trippers.go:473]     Content-Type: application/json
	I1209 10:49:57.273408  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:49:57.276004  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:49:57.276170  627293 main.go:141] libmachine: Making call to close driver server
	I1209 10:49:57.276182  627293 main.go:141] libmachine: (ha-792382) Calling .Close
	I1209 10:49:57.276582  627293 main.go:141] libmachine: Successfully made call to close driver server
	I1209 10:49:57.276606  627293 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 10:49:57.278423  627293 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1209 10:49:57.279715  627293 addons.go:510] duration metric: took 925.38672ms for enable addons: enabled=[storage-provisioner default-storageclass]
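The same pair of addons could also be toggled by hand against this profile with the minikube CLI, for example:

	minikube -p ha-792382 addons enable storage-provisioner
	minikube -p ha-792382 addons enable default-storageclass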
	I1209 10:49:57.279752  627293 start.go:246] waiting for cluster config update ...
	I1209 10:49:57.279765  627293 start.go:255] writing updated cluster config ...
	I1209 10:49:57.281341  627293 out.go:201] 
	I1209 10:49:57.282688  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:49:57.282758  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:49:57.284265  627293 out.go:177] * Starting "ha-792382-m02" control-plane node in "ha-792382" cluster
	I1209 10:49:57.285340  627293 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:49:57.285363  627293 cache.go:56] Caching tarball of preloaded images
	I1209 10:49:57.285479  627293 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 10:49:57.285499  627293 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 10:49:57.285580  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:49:57.285772  627293 start.go:360] acquireMachinesLock for ha-792382-m02: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 10:49:57.285830  627293 start.go:364] duration metric: took 34.649µs to acquireMachinesLock for "ha-792382-m02"
	I1209 10:49:57.285855  627293 start.go:93] Provisioning new machine with config: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:49:57.285945  627293 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1209 10:49:57.287544  627293 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 10:49:57.287637  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:49:57.287679  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:49:57.302923  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45627
	I1209 10:49:57.303345  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:49:57.303929  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:49:57.303955  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:49:57.304276  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:49:57.304507  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetMachineName
	I1209 10:49:57.304682  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:49:57.304915  627293 start.go:159] libmachine.API.Create for "ha-792382" (driver="kvm2")
	I1209 10:49:57.304958  627293 client.go:168] LocalClient.Create starting
	I1209 10:49:57.305006  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 10:49:57.305054  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:49:57.305076  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:49:57.305150  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 10:49:57.305184  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:49:57.305200  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:49:57.305226  627293 main.go:141] libmachine: Running pre-create checks...
	I1209 10:49:57.305237  627293 main.go:141] libmachine: (ha-792382-m02) Calling .PreCreateCheck
	I1209 10:49:57.305467  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetConfigRaw
	I1209 10:49:57.305949  627293 main.go:141] libmachine: Creating machine...
	I1209 10:49:57.305967  627293 main.go:141] libmachine: (ha-792382-m02) Calling .Create
	I1209 10:49:57.306165  627293 main.go:141] libmachine: (ha-792382-m02) Creating KVM machine...
	I1209 10:49:57.307365  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found existing default KVM network
	I1209 10:49:57.307532  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found existing private KVM network mk-ha-792382
	I1209 10:49:57.307606  627293 main.go:141] libmachine: (ha-792382-m02) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02 ...
	I1209 10:49:57.307640  627293 main.go:141] libmachine: (ha-792382-m02) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 10:49:57.307676  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:57.307595  627662 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:57.307776  627293 main.go:141] libmachine: (ha-792382-m02) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 10:49:57.586533  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:57.586377  627662 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa...
	I1209 10:49:57.697560  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:57.697424  627662 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/ha-792382-m02.rawdisk...
	I1209 10:49:57.697602  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Writing magic tar header
	I1209 10:49:57.697613  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Writing SSH key tar header
	I1209 10:49:57.697621  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:57.697562  627662 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02 ...
	I1209 10:49:57.697695  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02
	I1209 10:49:57.697714  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 10:49:57.697722  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02 (perms=drwx------)
	I1209 10:49:57.697738  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 10:49:57.697757  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:49:57.697771  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 10:49:57.697780  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 10:49:57.697790  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 10:49:57.697797  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home/jenkins
	I1209 10:49:57.697803  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Checking permissions on dir: /home
	I1209 10:49:57.697812  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Skipping /home - not owner
	I1209 10:49:57.697828  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 10:49:57.697853  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 10:49:57.697862  627293 main.go:141] libmachine: (ha-792382-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 10:49:57.697867  627293 main.go:141] libmachine: (ha-792382-m02) Creating domain...
	I1209 10:49:57.698931  627293 main.go:141] libmachine: (ha-792382-m02) define libvirt domain using xml: 
	I1209 10:49:57.698948  627293 main.go:141] libmachine: (ha-792382-m02) <domain type='kvm'>
	I1209 10:49:57.698955  627293 main.go:141] libmachine: (ha-792382-m02)   <name>ha-792382-m02</name>
	I1209 10:49:57.698960  627293 main.go:141] libmachine: (ha-792382-m02)   <memory unit='MiB'>2200</memory>
	I1209 10:49:57.698965  627293 main.go:141] libmachine: (ha-792382-m02)   <vcpu>2</vcpu>
	I1209 10:49:57.698968  627293 main.go:141] libmachine: (ha-792382-m02)   <features>
	I1209 10:49:57.698974  627293 main.go:141] libmachine: (ha-792382-m02)     <acpi/>
	I1209 10:49:57.698977  627293 main.go:141] libmachine: (ha-792382-m02)     <apic/>
	I1209 10:49:57.698982  627293 main.go:141] libmachine: (ha-792382-m02)     <pae/>
	I1209 10:49:57.698985  627293 main.go:141] libmachine: (ha-792382-m02)     
	I1209 10:49:57.698991  627293 main.go:141] libmachine: (ha-792382-m02)   </features>
	I1209 10:49:57.698996  627293 main.go:141] libmachine: (ha-792382-m02)   <cpu mode='host-passthrough'>
	I1209 10:49:57.699000  627293 main.go:141] libmachine: (ha-792382-m02)   
	I1209 10:49:57.699004  627293 main.go:141] libmachine: (ha-792382-m02)   </cpu>
	I1209 10:49:57.699009  627293 main.go:141] libmachine: (ha-792382-m02)   <os>
	I1209 10:49:57.699013  627293 main.go:141] libmachine: (ha-792382-m02)     <type>hvm</type>
	I1209 10:49:57.699018  627293 main.go:141] libmachine: (ha-792382-m02)     <boot dev='cdrom'/>
	I1209 10:49:57.699034  627293 main.go:141] libmachine: (ha-792382-m02)     <boot dev='hd'/>
	I1209 10:49:57.699053  627293 main.go:141] libmachine: (ha-792382-m02)     <bootmenu enable='no'/>
	I1209 10:49:57.699065  627293 main.go:141] libmachine: (ha-792382-m02)   </os>
	I1209 10:49:57.699070  627293 main.go:141] libmachine: (ha-792382-m02)   <devices>
	I1209 10:49:57.699074  627293 main.go:141] libmachine: (ha-792382-m02)     <disk type='file' device='cdrom'>
	I1209 10:49:57.699083  627293 main.go:141] libmachine: (ha-792382-m02)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/boot2docker.iso'/>
	I1209 10:49:57.699087  627293 main.go:141] libmachine: (ha-792382-m02)       <target dev='hdc' bus='scsi'/>
	I1209 10:49:57.699092  627293 main.go:141] libmachine: (ha-792382-m02)       <readonly/>
	I1209 10:49:57.699095  627293 main.go:141] libmachine: (ha-792382-m02)     </disk>
	I1209 10:49:57.699101  627293 main.go:141] libmachine: (ha-792382-m02)     <disk type='file' device='disk'>
	I1209 10:49:57.699106  627293 main.go:141] libmachine: (ha-792382-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 10:49:57.699114  627293 main.go:141] libmachine: (ha-792382-m02)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/ha-792382-m02.rawdisk'/>
	I1209 10:49:57.699122  627293 main.go:141] libmachine: (ha-792382-m02)       <target dev='hda' bus='virtio'/>
	I1209 10:49:57.699137  627293 main.go:141] libmachine: (ha-792382-m02)     </disk>
	I1209 10:49:57.699147  627293 main.go:141] libmachine: (ha-792382-m02)     <interface type='network'>
	I1209 10:49:57.699179  627293 main.go:141] libmachine: (ha-792382-m02)       <source network='mk-ha-792382'/>
	I1209 10:49:57.699205  627293 main.go:141] libmachine: (ha-792382-m02)       <model type='virtio'/>
	I1209 10:49:57.699214  627293 main.go:141] libmachine: (ha-792382-m02)     </interface>
	I1209 10:49:57.699227  627293 main.go:141] libmachine: (ha-792382-m02)     <interface type='network'>
	I1209 10:49:57.699257  627293 main.go:141] libmachine: (ha-792382-m02)       <source network='default'/>
	I1209 10:49:57.699276  627293 main.go:141] libmachine: (ha-792382-m02)       <model type='virtio'/>
	I1209 10:49:57.699287  627293 main.go:141] libmachine: (ha-792382-m02)     </interface>
	I1209 10:49:57.699295  627293 main.go:141] libmachine: (ha-792382-m02)     <serial type='pty'>
	I1209 10:49:57.699302  627293 main.go:141] libmachine: (ha-792382-m02)       <target port='0'/>
	I1209 10:49:57.699309  627293 main.go:141] libmachine: (ha-792382-m02)     </serial>
	I1209 10:49:57.699314  627293 main.go:141] libmachine: (ha-792382-m02)     <console type='pty'>
	I1209 10:49:57.699320  627293 main.go:141] libmachine: (ha-792382-m02)       <target type='serial' port='0'/>
	I1209 10:49:57.699325  627293 main.go:141] libmachine: (ha-792382-m02)     </console>
	I1209 10:49:57.699332  627293 main.go:141] libmachine: (ha-792382-m02)     <rng model='virtio'>
	I1209 10:49:57.699338  627293 main.go:141] libmachine: (ha-792382-m02)       <backend model='random'>/dev/random</backend>
	I1209 10:49:57.699352  627293 main.go:141] libmachine: (ha-792382-m02)     </rng>
	I1209 10:49:57.699360  627293 main.go:141] libmachine: (ha-792382-m02)     
	I1209 10:49:57.699364  627293 main.go:141] libmachine: (ha-792382-m02)     
	I1209 10:49:57.699370  627293 main.go:141] libmachine: (ha-792382-m02)   </devices>
	I1209 10:49:57.699374  627293 main.go:141] libmachine: (ha-792382-m02) </domain>
	I1209 10:49:57.699384  627293 main.go:141] libmachine: (ha-792382-m02) 
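The domain XML printed above is handed to libvirt by the kvm2 driver. Roughly the same steps could be performed manually with virsh against the qemu:///system URI from the cluster config, assuming the XML were saved to a hypothetical file ha-792382-m02.xml:

	virsh --connect qemu:///system define ha-792382-m02.xml
	virsh --connect qemu:///system start ha-792382-m02
	virsh --connect qemu:///system domifaddr ha-792382-m02   # poll for the DHCP lease, as the retries below do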
	I1209 10:49:57.706829  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:be:31:4f in network default
	I1209 10:49:57.707394  627293 main.go:141] libmachine: (ha-792382-m02) Ensuring networks are active...
	I1209 10:49:57.707420  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:49:57.708099  627293 main.go:141] libmachine: (ha-792382-m02) Ensuring network default is active
	I1209 10:49:57.708447  627293 main.go:141] libmachine: (ha-792382-m02) Ensuring network mk-ha-792382 is active
	I1209 10:49:57.708833  627293 main.go:141] libmachine: (ha-792382-m02) Getting domain xml...
	I1209 10:49:57.709562  627293 main.go:141] libmachine: (ha-792382-m02) Creating domain...
	I1209 10:49:58.965991  627293 main.go:141] libmachine: (ha-792382-m02) Waiting to get IP...
	I1209 10:49:58.967025  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:49:58.967615  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:49:58.967718  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:58.967609  627662 retry.go:31] will retry after 289.483594ms: waiting for machine to come up
	I1209 10:49:59.259398  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:49:59.259929  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:49:59.259958  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:59.259877  627662 retry.go:31] will retry after 368.739813ms: waiting for machine to come up
	I1209 10:49:59.630595  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:49:59.631082  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:49:59.631111  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:49:59.631032  627662 retry.go:31] will retry after 468.793736ms: waiting for machine to come up
	I1209 10:50:00.101924  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:00.102437  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:00.102468  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:00.102389  627662 retry.go:31] will retry after 467.16032ms: waiting for machine to come up
	I1209 10:50:00.571568  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:00.572085  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:00.572158  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:00.571967  627662 retry.go:31] will retry after 614.331886ms: waiting for machine to come up
	I1209 10:50:01.188165  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:01.188721  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:01.188753  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:01.188683  627662 retry.go:31] will retry after 622.291039ms: waiting for machine to come up
	I1209 10:50:01.812761  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:01.813166  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:01.813197  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:01.813093  627662 retry.go:31] will retry after 970.350077ms: waiting for machine to come up
	I1209 10:50:02.785861  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:02.786416  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:02.786477  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:02.786368  627662 retry.go:31] will retry after 1.09205339s: waiting for machine to come up
	I1209 10:50:03.879814  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:03.880303  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:03.880327  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:03.880248  627662 retry.go:31] will retry after 1.765651975s: waiting for machine to come up
	I1209 10:50:05.648159  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:05.648671  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:05.648696  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:05.648615  627662 retry.go:31] will retry after 1.762832578s: waiting for machine to come up
	I1209 10:50:07.413599  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:07.414030  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:07.414059  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:07.413978  627662 retry.go:31] will retry after 2.150383903s: waiting for machine to come up
	I1209 10:50:09.565911  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:09.566390  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:09.566420  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:09.566350  627662 retry.go:31] will retry after 3.049537741s: waiting for machine to come up
	I1209 10:50:12.617744  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:12.618241  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:12.618276  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:12.618155  627662 retry.go:31] will retry after 3.599687882s: waiting for machine to come up
	I1209 10:50:16.219399  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:16.219837  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find current IP address of domain ha-792382-m02 in network mk-ha-792382
	I1209 10:50:16.219868  627293 main.go:141] libmachine: (ha-792382-m02) DBG | I1209 10:50:16.219789  627662 retry.go:31] will retry after 3.518875962s: waiting for machine to come up
	I1209 10:50:19.740130  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.740985  627293 main.go:141] libmachine: (ha-792382-m02) Found IP for machine: 192.168.39.89
	I1209 10:50:19.741024  627293 main.go:141] libmachine: (ha-792382-m02) Reserving static IP address...
	I1209 10:50:19.741037  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.741518  627293 main.go:141] libmachine: (ha-792382-m02) DBG | unable to find host DHCP lease matching {name: "ha-792382-m02", mac: "52:54:00:95:40:00", ip: "192.168.39.89"} in network mk-ha-792382
	I1209 10:50:19.814048  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Getting to WaitForSSH function...
	I1209 10:50:19.814070  627293 main.go:141] libmachine: (ha-792382-m02) Reserved static IP address: 192.168.39.89
	I1209 10:50:19.814078  627293 main.go:141] libmachine: (ha-792382-m02) Waiting for SSH to be available...
	I1209 10:50:19.816613  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.817057  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:95:40:00}
	I1209 10:50:19.817166  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.817261  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Using SSH client type: external
	I1209 10:50:19.817282  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa (-rw-------)
	I1209 10:50:19.817362  627293 main.go:141] libmachine: (ha-792382-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 10:50:19.817390  627293 main.go:141] libmachine: (ha-792382-m02) DBG | About to run SSH command:
	I1209 10:50:19.817411  627293 main.go:141] libmachine: (ha-792382-m02) DBG | exit 0
	I1209 10:50:19.942297  627293 main.go:141] libmachine: (ha-792382-m02) DBG | SSH cmd err, output: <nil>: 
	I1209 10:50:19.942595  627293 main.go:141] libmachine: (ha-792382-m02) KVM machine creation complete!
	I1209 10:50:19.942914  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetConfigRaw
	I1209 10:50:19.943559  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:19.943781  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:19.943947  627293 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 10:50:19.943965  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetState
	I1209 10:50:19.945579  627293 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 10:50:19.945598  627293 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 10:50:19.945607  627293 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 10:50:19.945616  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:19.947916  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.948374  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:19.948400  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:19.948582  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:19.948773  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:19.948920  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:19.949049  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:19.949307  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:19.949555  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:19.949573  627293 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 10:50:20.053499  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:50:20.053528  627293 main.go:141] libmachine: Detecting the provisioner...
	I1209 10:50:20.053541  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.056444  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.056881  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.056911  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.057119  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.057366  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.057545  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.057698  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.057856  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:20.058022  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:20.058034  627293 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 10:50:20.162532  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 10:50:20.162621  627293 main.go:141] libmachine: found compatible host: buildroot
	I1209 10:50:20.162636  627293 main.go:141] libmachine: Provisioning with buildroot...
	I1209 10:50:20.162651  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetMachineName
	I1209 10:50:20.162892  627293 buildroot.go:166] provisioning hostname "ha-792382-m02"
	I1209 10:50:20.162921  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetMachineName
	I1209 10:50:20.163135  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.165692  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.166051  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.166078  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.166237  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.166425  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.166592  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.166734  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.166863  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:20.167071  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:20.167087  627293 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792382-m02 && echo "ha-792382-m02" | sudo tee /etc/hostname
	I1209 10:50:20.285783  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792382-m02
	
	I1209 10:50:20.285812  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.288581  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.288945  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.289006  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.289156  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.289374  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.289525  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.289675  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.289834  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:20.290050  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:20.290067  627293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792382-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792382-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792382-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 10:50:20.403745  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:50:20.403780  627293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 10:50:20.403797  627293 buildroot.go:174] setting up certificates
	I1209 10:50:20.403807  627293 provision.go:84] configureAuth start
	I1209 10:50:20.403816  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetMachineName
	I1209 10:50:20.404127  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetIP
	I1209 10:50:20.406853  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.407317  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.407339  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.407523  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.410235  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.410616  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.410641  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.410813  627293 provision.go:143] copyHostCerts
	I1209 10:50:20.410851  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:50:20.410897  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 10:50:20.410910  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:50:20.410996  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 10:50:20.411092  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:50:20.411117  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 10:50:20.411127  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:50:20.411167  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 10:50:20.411241  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:50:20.411265  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 10:50:20.411274  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:50:20.411310  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 10:50:20.411379  627293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.ha-792382-m02 san=[127.0.0.1 192.168.39.89 ha-792382-m02 localhost minikube]
	I1209 10:50:20.506946  627293 provision.go:177] copyRemoteCerts
	I1209 10:50:20.507013  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 10:50:20.507043  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.509588  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.509997  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.510031  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.510256  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.510485  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.510630  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.510792  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa Username:docker}
	I1209 10:50:20.591669  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 10:50:20.591738  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 10:50:20.614379  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 10:50:20.614474  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 10:50:20.635752  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 10:50:20.635819  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 10:50:20.657840  627293 provision.go:87] duration metric: took 254.019642ms to configureAuth
	I1209 10:50:20.657873  627293 buildroot.go:189] setting minikube options for container-runtime
	I1209 10:50:20.658088  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:50:20.658221  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.661758  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.662150  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.662207  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.662350  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.662590  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.662773  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.662982  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.663174  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:20.663396  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:20.663417  627293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 10:50:20.895342  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 10:50:20.895376  627293 main.go:141] libmachine: Checking connection to Docker...
	I1209 10:50:20.895386  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetURL
	I1209 10:50:20.896678  627293 main.go:141] libmachine: (ha-792382-m02) DBG | Using libvirt version 6000000
	I1209 10:50:20.899127  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.899492  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.899524  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.899662  627293 main.go:141] libmachine: Docker is up and running!
	I1209 10:50:20.899675  627293 main.go:141] libmachine: Reticulating splines...
	I1209 10:50:20.899683  627293 client.go:171] duration metric: took 23.594715586s to LocalClient.Create
	I1209 10:50:20.899712  627293 start.go:167] duration metric: took 23.594799788s to libmachine.API.Create "ha-792382"
	I1209 10:50:20.899727  627293 start.go:293] postStartSetup for "ha-792382-m02" (driver="kvm2")
	I1209 10:50:20.899740  627293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 10:50:20.899762  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:20.899988  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 10:50:20.900011  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:20.902193  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.902545  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:20.902574  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:20.902733  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:20.902907  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:20.903055  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:20.903224  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa Username:docker}
	I1209 10:50:20.987979  627293 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 10:50:20.992183  627293 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 10:50:20.992210  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 10:50:20.992280  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 10:50:20.992373  627293 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 10:50:20.992388  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /etc/ssl/certs/6170172.pem
	I1209 10:50:20.992517  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 10:50:21.001255  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:50:21.023333  627293 start.go:296] duration metric: took 123.590873ms for postStartSetup
	I1209 10:50:21.023382  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetConfigRaw
	I1209 10:50:21.024074  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetIP
	I1209 10:50:21.026760  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.027216  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.027253  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.027452  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:50:21.027657  627293 start.go:128] duration metric: took 23.741699232s to createHost
	I1209 10:50:21.027689  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:21.029948  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.030322  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.030343  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.030537  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:21.030708  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:21.030868  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:21.031040  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:21.031235  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:50:21.031525  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1209 10:50:21.031542  627293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 10:50:21.134634  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733741421.109382404
	
	I1209 10:50:21.134664  627293 fix.go:216] guest clock: 1733741421.109382404
	I1209 10:50:21.134671  627293 fix.go:229] Guest: 2024-12-09 10:50:21.109382404 +0000 UTC Remote: 2024-12-09 10:50:21.027672389 +0000 UTC m=+68.911911388 (delta=81.710015ms)
	I1209 10:50:21.134687  627293 fix.go:200] guest clock delta is within tolerance: 81.710015ms
	I1209 10:50:21.134693  627293 start.go:83] releasing machines lock for "ha-792382-m02", held for 23.84885063s
	I1209 10:50:21.134711  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:21.135011  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetIP
	I1209 10:50:21.137922  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.138329  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.138359  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.140711  627293 out.go:177] * Found network options:
	I1209 10:50:21.142033  627293 out.go:177]   - NO_PROXY=192.168.39.69
	W1209 10:50:21.143264  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 10:50:21.143304  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:21.143961  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:21.144186  627293 main.go:141] libmachine: (ha-792382-m02) Calling .DriverName
	I1209 10:50:21.144305  627293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 10:50:21.144354  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	W1209 10:50:21.144454  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 10:50:21.144534  627293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 10:50:21.144559  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHHostname
	I1209 10:50:21.147622  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.147846  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.147959  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.147994  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.148084  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:21.148250  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:21.148369  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:21.148396  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:21.148419  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:21.148619  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa Username:docker}
	I1209 10:50:21.148763  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHPort
	I1209 10:50:21.148870  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHKeyPath
	I1209 10:50:21.149177  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetSSHUsername
	I1209 10:50:21.149326  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m02/id_rsa Username:docker}
	I1209 10:50:21.377528  627293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 10:50:21.383869  627293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 10:50:21.383962  627293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 10:50:21.402713  627293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 10:50:21.402747  627293 start.go:495] detecting cgroup driver to use...
	I1209 10:50:21.402836  627293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 10:50:21.418644  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 10:50:21.431825  627293 docker.go:217] disabling cri-docker service (if available) ...
	I1209 10:50:21.431894  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 10:50:21.445030  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 10:50:21.458235  627293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 10:50:21.576888  627293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 10:50:21.715254  627293 docker.go:233] disabling docker service ...
	I1209 10:50:21.715337  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 10:50:21.728777  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 10:50:21.741484  627293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 10:50:21.877920  627293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 10:50:21.987438  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 10:50:22.000287  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 10:50:22.017967  627293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 10:50:22.018044  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.027586  627293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 10:50:22.027647  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.037032  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.046716  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.056390  627293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 10:50:22.066025  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.075591  627293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.092169  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:50:22.102292  627293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 10:50:22.111580  627293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 10:50:22.111645  627293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 10:50:22.124823  627293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 10:50:22.134059  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:50:22.267517  627293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 10:50:22.360113  627293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 10:50:22.360202  627293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 10:50:22.366049  627293 start.go:563] Will wait 60s for crictl version
	I1209 10:50:22.366124  627293 ssh_runner.go:195] Run: which crictl
	I1209 10:50:22.369685  627293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 10:50:22.406117  627293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 10:50:22.406233  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:50:22.433831  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:50:22.466702  627293 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 10:50:22.468114  627293 out.go:177]   - env NO_PROXY=192.168.39.69
	I1209 10:50:22.469415  627293 main.go:141] libmachine: (ha-792382-m02) Calling .GetIP
	I1209 10:50:22.472354  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:22.472792  627293 main.go:141] libmachine: (ha-792382-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:40:00", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:50:11 +0000 UTC Type:0 Mac:52:54:00:95:40:00 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-792382-m02 Clientid:01:52:54:00:95:40:00}
	I1209 10:50:22.472838  627293 main.go:141] libmachine: (ha-792382-m02) DBG | domain ha-792382-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:95:40:00 in network mk-ha-792382
	I1209 10:50:22.473063  627293 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 10:50:22.478206  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:50:22.490975  627293 mustload.go:65] Loading cluster: ha-792382
	I1209 10:50:22.491223  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:50:22.491515  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:50:22.491566  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:50:22.507354  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34813
	I1209 10:50:22.507839  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:50:22.508378  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:50:22.508407  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:50:22.508811  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:50:22.509022  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:50:22.510469  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:50:22.510748  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:50:22.510785  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:50:22.525474  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34445
	I1209 10:50:22.525972  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:50:22.526524  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:50:22.526554  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:50:22.526848  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:50:22.527055  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:50:22.527271  627293 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382 for IP: 192.168.39.89
	I1209 10:50:22.527285  627293 certs.go:194] generating shared ca certs ...
	I1209 10:50:22.527308  627293 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:50:22.527465  627293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 10:50:22.527507  627293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 10:50:22.527514  627293 certs.go:256] generating profile certs ...
	I1209 10:50:22.527587  627293 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key
	I1209 10:50:22.527613  627293 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.8c4cfabb
	I1209 10:50:22.527628  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.8c4cfabb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.89 192.168.39.254]
	I1209 10:50:22.618893  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.8c4cfabb ...
	I1209 10:50:22.618924  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.8c4cfabb: {Name:mk9fc14aa3aaf65091f9f2d45f3765515e31473e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:50:22.619129  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.8c4cfabb ...
	I1209 10:50:22.619148  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.8c4cfabb: {Name:mk41f99fa98267e5a58e4b407fa7296350fea4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:50:22.619255  627293 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.8c4cfabb -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt
	I1209 10:50:22.619394  627293 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.8c4cfabb -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key
	I1209 10:50:22.619538  627293 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key
	I1209 10:50:22.619555  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 10:50:22.619568  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 10:50:22.619579  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 10:50:22.619593  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 10:50:22.619603  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 10:50:22.619614  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 10:50:22.619626  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 10:50:22.619636  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 10:50:22.619683  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 10:50:22.619711  627293 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 10:50:22.619720  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 10:50:22.619743  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 10:50:22.619767  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 10:50:22.619790  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 10:50:22.619828  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:50:22.619853  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /usr/share/ca-certificates/6170172.pem
	I1209 10:50:22.619866  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:50:22.619877  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem -> /usr/share/ca-certificates/617017.pem
	I1209 10:50:22.619908  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:50:22.623291  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:50:22.623706  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:50:22.623734  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:50:22.623919  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:50:22.624122  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:50:22.624329  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:50:22.624526  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:50:22.694590  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 10:50:22.700190  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 10:50:22.715537  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 10:50:22.720737  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 10:50:22.731623  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 10:50:22.736050  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 10:50:22.747578  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 10:50:22.752312  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 10:50:22.763588  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 10:50:22.768050  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 10:50:22.777655  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 10:50:22.781717  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1209 10:50:22.792464  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 10:50:22.816318  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 10:50:22.837988  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 10:50:22.861671  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 10:50:22.883735  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1209 10:50:22.904888  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 10:50:22.926092  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 10:50:22.947329  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 10:50:22.968466  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 10:50:22.989908  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 10:50:23.012190  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 10:50:23.036349  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 10:50:23.051329  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 10:50:23.066824  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 10:50:23.081626  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 10:50:23.096856  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 10:50:23.112249  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1209 10:50:23.126784  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 10:50:23.141365  627293 ssh_runner.go:195] Run: openssl version
	I1209 10:50:23.146879  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 10:50:23.156698  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 10:50:23.160669  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 10:50:23.160717  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 10:50:23.166987  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 10:50:23.176745  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 10:50:23.186586  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:50:23.190639  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:50:23.190687  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:50:23.195990  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 10:50:23.205745  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 10:50:23.215364  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 10:50:23.219316  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 10:50:23.219368  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 10:50:23.225208  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
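The three blocks above install minikube's CA material into the node's OpenSSL trust store: each PEM under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout`, and a symlink named `<hash>.0` is created under /etc/ssl/certs (hence 3ec20f2e.0, b5213941.0 and 51391683.0). A minimal Go sketch of that step, assuming passwordless sudo and an openssl binary on PATH; paths are illustrative:

    // installCACert links certPath into /etc/ssl/certs under its OpenSSL subject hash,
    // mirroring the `openssl x509 -hash -noout` + `ln -fs` commands in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func installCACert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// -f replaces a stale link, -s makes it symbolic, matching the log's commands.
    	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println(err)
    	}
    }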
	I1209 10:50:23.235141  627293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 10:50:23.238820  627293 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 10:50:23.238882  627293 kubeadm.go:934] updating node {m02 192.168.39.89 8443 v1.31.2 crio true true} ...
	I1209 10:50:23.238988  627293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792382-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 10:50:23.239016  627293 kube-vip.go:115] generating kube-vip config ...
	I1209 10:50:23.239060  627293 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 10:50:23.254073  627293 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 10:50:23.254184  627293 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
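The manifest above is the kube-vip static pod that is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below: it claims the HA virtual IP 192.168.39.254 on eth0, uses a Lease (plndr-cp-lock) in kube-system for leader election, and load-balances the control plane on port 8443. A hedged sketch of rendering such a manifest from a handful of parameters; the template is a deliberately abbreviated stand-in, not minikube's actual one:

    // Renders an abbreviated kube-vip static pod manifest from a few parameters.
    // The template is a simplified stand-in for the full config shown in the log.
    package main

    import (
    	"os"
    	"text/template"
    )

    const kubeVipTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{.Image}}
        args: ["manager"]
        env:
        - name: address
          value: "{{.VIP}}"
        - name: vip_interface
          value: "{{.Interface}}"
        - name: port
          value: "{{.Port}}"
      hostNetwork: true
    `

    type kubeVipParams struct {
    	Image, VIP, Interface, Port string
    }

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
    	// Values taken from the log above.
    	_ = t.Execute(os.Stdout, kubeVipParams{
    		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.7",
    		VIP:       "192.168.39.254",
    		Interface: "eth0",
    		Port:      "8443",
    	})
    }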
	I1209 10:50:23.254233  627293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 10:50:23.263688  627293 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1209 10:50:23.263749  627293 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1209 10:50:23.272494  627293 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1209 10:50:23.272527  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 10:50:23.272570  627293 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1209 10:50:23.272599  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 10:50:23.272611  627293 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1209 10:50:23.276784  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1209 10:50:23.276819  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1209 10:50:24.168986  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 10:50:24.169072  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 10:50:24.174707  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1209 10:50:24.174764  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1209 10:50:24.294393  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:50:24.325197  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 10:50:24.325289  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 10:50:24.335547  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1209 10:50:24.335594  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
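The kubectl/kubeadm/kubelet downloads above use `?checksum=file:<url>.sha256` query parameters, so each cached binary is verified against the digest Kubernetes publishes before being pushed to the node. A standalone Go sketch of the same verification with the standard library (assumes the first field of the .sha256 file is the hex digest; the file name and URL are taken from the log):

    // Verifies a downloaded Kubernetes binary against its published .sha256 file,
    // the same guarantee the "?checksum=file:..." download URLs above provide.
    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    func fetchExpectedSum(url string) (string, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return "", err
    	}
    	defer resp.Body.Close()
    	b, err := io.ReadAll(resp.Body)
    	if err != nil {
    		return "", err
    	}
    	return strings.Fields(string(b))[0], nil
    }

    func verify(path, sumURL string) error {
    	want, err := fetchExpectedSum(sumURL)
    	if err != nil {
    		return err
    	}
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	h := sha256.New()
    	if _, err := io.Copy(h, f); err != nil {
    		return err
    	}
    	got := hex.EncodeToString(h.Sum(nil))
    	if got != want {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
    	}
    	return nil
    }

    func main() {
    	err := verify("kubelet", "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256")
    	fmt.Println(err)
    }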
	I1209 10:50:24.706937  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 10:50:24.715886  627293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 10:50:24.731189  627293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 10:50:24.746662  627293 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 10:50:24.762089  627293 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 10:50:24.765881  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
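The bash one-liner above keeps /etc/hosts pointing control-plane.minikube.internal at the HA VIP: it drops any existing line for that name, appends the current mapping, and copies the result back into place via a temp file. An equivalent Go sketch (privileges and the atomic temp-file replace are simplified):

    // Rewrites /etc/hosts so exactly one line maps the control-plane name to vip,
    // mirroring the grep -v / echo / cp pipeline shown in the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func pinControlPlane(hostsPath, vip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop any stale mapping for the control-plane name
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", vip, name))
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	err := pinControlPlane("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal")
    	fmt.Println(err)
    }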
	I1209 10:50:24.777191  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:50:24.904006  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:50:24.921009  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:50:24.921461  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:50:24.921511  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:50:24.937482  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I1209 10:50:24.937973  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:50:24.938486  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:50:24.938508  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:50:24.938885  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:50:24.939098  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:50:24.939248  627293 start.go:317] joinCluster: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:50:24.939386  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 10:50:24.939418  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:50:24.942285  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:50:24.942827  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:50:24.942855  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:50:24.942985  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:50:24.943215  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:50:24.943387  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:50:24.943515  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:50:25.097594  627293 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:50:25.097643  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dvotig.smgl74cs6saznre8 --discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-792382-m02 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443"
	I1209 10:50:47.230030  627293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dvotig.smgl74cs6saznre8 --discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-792382-m02 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443": (22.132356511s)
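The join command above trusts the cluster CA via --discovery-token-ca-cert-hash, which is a SHA-256 over the CA certificate's Subject Public Key Info in kubeadm's pubkeypin format. A short Go sketch that derives the same value from a PEM-encoded CA certificate (the path is illustrative):

    // Computes the kubeadm discovery-token-ca-cert-hash (sha256 of the CA cert's SPKI)
    // for a PEM-encoded certificate, the value passed to `kubeadm join` in the log above.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func caCertHash(pemPath string) (string, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return "", err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return "", fmt.Errorf("no PEM block in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return "", err
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	return "sha256:" + hex.EncodeToString(sum[:]), nil
    }

    func main() {
    	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
    	fmt.Println(h, err)
    }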
	I1209 10:50:47.230081  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1209 10:50:47.777805  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792382-m02 minikube.k8s.io/updated_at=2024_12_09T10_50_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=ha-792382 minikube.k8s.io/primary=false
	I1209 10:50:47.938150  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-792382-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 10:50:48.082480  627293 start.go:319] duration metric: took 23.143228187s to joinCluster
	I1209 10:50:48.082581  627293 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:50:48.082941  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:50:48.084770  627293 out.go:177] * Verifying Kubernetes components...
	I1209 10:50:48.085991  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:50:48.337603  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:50:48.368412  627293 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:50:48.368651  627293 kapi.go:59] client config for ha-792382: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt", KeyFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key", CAFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 10:50:48.368776  627293 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.69:8443
	I1209 10:50:48.369027  627293 node_ready.go:35] waiting up to 6m0s for node "ha-792382-m02" to be "Ready" ...
	I1209 10:50:48.369182  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:48.369197  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:48.369210  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:48.369215  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:48.379219  627293 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1209 10:50:48.869436  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:48.869471  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:48.869484  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:48.869491  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:48.873562  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:50:49.369649  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:49.369671  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:49.369679  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:49.369685  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:49.372678  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:50:49.869490  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:49.869516  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:49.869525  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:49.869529  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:49.872495  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:50:50.369998  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:50.370028  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:50.370038  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:50.370043  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:50.374983  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:50:50.377595  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:50:50.869651  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:50.869674  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:50.869688  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:50.869692  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:50.906453  627293 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I1209 10:50:51.369287  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:51.369317  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:51.369329  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:51.369335  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:51.372362  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:51.870258  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:51.870289  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:51.870302  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:51.870310  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:51.873898  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:52.370080  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:52.370105  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:52.370115  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:52.370118  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:52.376430  627293 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 10:50:52.869331  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:52.869355  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:52.869364  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:52.869368  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:52.873136  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:52.873737  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:50:53.370232  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:53.370258  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:53.370267  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:53.370272  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:53.373647  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:53.869640  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:53.869666  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:53.869674  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:53.869678  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:53.872620  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:50:54.369762  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:54.369789  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:54.369798  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:54.369802  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:54.373551  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:54.869513  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:54.869538  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:54.869547  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:54.869552  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:54.872817  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:55.369351  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:55.369377  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:55.369387  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:55.369391  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:55.372662  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:55.373185  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:50:55.869601  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:55.869626  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:55.869636  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:55.869642  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:55.873128  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:56.369713  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:56.369741  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:56.369751  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:56.369755  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:56.373053  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:56.870191  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:56.870225  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:56.870238  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:56.870247  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:56.873685  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:57.369825  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:57.369849  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:57.369858  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:57.369861  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:57.373394  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:57.373898  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:50:57.869257  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:57.869284  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:57.869293  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:57.869297  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:57.872590  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:58.369600  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:58.369629  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:58.369641  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:58.369648  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:58.372771  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:58.869748  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:58.869775  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:58.869784  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:58.869788  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:58.873037  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:59.369979  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:59.370004  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:59.370013  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:59.370017  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:59.373442  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:59.869269  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:50:59.869294  627293 round_trippers.go:469] Request Headers:
	I1209 10:50:59.869309  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:50:59.869314  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:50:59.872720  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:50:59.873370  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:51:00.369254  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:00.369281  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:00.369289  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:00.369294  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:00.372431  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:00.869327  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:00.869352  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:00.869361  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:00.869365  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:00.872790  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:01.369711  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:01.369743  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:01.369755  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:01.369761  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:01.372739  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:01.869629  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:01.869659  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:01.869672  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:01.869680  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:01.873312  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:01.873858  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:51:02.369761  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:02.369798  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:02.369811  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:02.369818  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:02.373514  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:02.869485  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:02.869511  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:02.869524  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:02.869530  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:02.875847  627293 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 10:51:03.369998  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:03.370025  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:03.370034  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:03.370039  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:03.373227  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:03.870196  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:03.870226  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:03.870238  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:03.870245  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:03.873280  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:03.873981  627293 node_ready.go:53] node "ha-792382-m02" has status "Ready":"False"
	I1209 10:51:04.369276  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:04.369305  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:04.369314  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:04.369318  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:04.373386  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:04.869282  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:04.869309  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:04.869317  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:04.869321  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:04.872919  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:05.369501  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:05.369531  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.369544  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.369551  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.373273  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:05.869275  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:05.869301  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.869313  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.869319  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.875077  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:51:05.875712  627293 node_ready.go:49] node "ha-792382-m02" has status "Ready":"True"
	I1209 10:51:05.875741  627293 node_ready.go:38] duration metric: took 17.506691417s for node "ha-792382-m02" to be "Ready" ...
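The GET loop above is minikube polling /api/v1/nodes/ha-792382-m02 every 500ms until the node's Ready condition flips to True (about 17.5s here). Expressed with client-go, the same wait looks roughly like the sketch below; the kubeconfig path, poll interval, and timeout are placeholders rather than minikube's exact values:

    // Polls a node until its NodeReady condition is True, the check the GET loop in the
    // log performs by hand. Sketch only; transient API errors are simply retried.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep retrying on transient errors
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitForNodeReady(context.Background(), cs, "ha-792382-m02", 6*time.Minute))
    }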
	I1209 10:51:05.875753  627293 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 10:51:05.875877  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:05.875894  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.875903  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.875908  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.880622  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:05.886687  627293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.886796  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8hlml
	I1209 10:51:05.886807  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.886815  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.886820  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.891623  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:05.892565  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:05.892583  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.892608  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.892615  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.895456  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.895899  627293 pod_ready.go:93] pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:05.895917  627293 pod_ready.go:82] duration metric: took 9.205439ms for pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.895927  627293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.895993  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rz6mw
	I1209 10:51:05.896006  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.896013  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.896016  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.898484  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.899083  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:05.899101  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.899108  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.899112  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.901260  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.901817  627293 pod_ready.go:93] pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:05.901842  627293 pod_ready.go:82] duration metric: took 5.908358ms for pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.901854  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.901923  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382
	I1209 10:51:05.901934  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.901946  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.901953  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.904274  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.905123  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:05.905142  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.905152  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.905158  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.907644  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.908181  627293 pod_ready.go:93] pod "etcd-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:05.908211  627293 pod_ready.go:82] duration metric: took 6.349761ms for pod "etcd-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.908224  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.908297  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382-m02
	I1209 10:51:05.908307  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.908318  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.908329  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.910369  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.910967  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:05.910983  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:05.910992  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:05.910997  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:05.913018  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:05.913518  627293 pod_ready.go:93] pod "etcd-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:05.913539  627293 pod_ready.go:82] duration metric: took 5.308048ms for pod "etcd-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:05.913558  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.070017  627293 request.go:632] Waited for 156.363826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382
	I1209 10:51:06.070081  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382
	I1209 10:51:06.070086  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.070095  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.070102  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.073645  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:06.269848  627293 request.go:632] Waited for 195.364699ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:06.269918  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:06.269924  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.269931  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.269935  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.272803  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:51:06.273443  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:06.273469  627293 pod_ready.go:82] duration metric: took 359.901606ms for pod "kube-apiserver-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.273484  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.469639  627293 request.go:632] Waited for 196.043735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m02
	I1209 10:51:06.469733  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m02
	I1209 10:51:06.469741  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.469754  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.469762  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.473158  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:06.670306  627293 request.go:632] Waited for 196.412719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:06.670379  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:06.670387  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.670399  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.670409  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.673435  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:06.673975  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:06.673996  627293 pod_ready.go:82] duration metric: took 400.504015ms for pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.674006  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:06.870147  627293 request.go:632] Waited for 196.063707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382
	I1209 10:51:06.870265  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382
	I1209 10:51:06.870276  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:06.870285  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:06.870292  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:06.873707  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.069908  627293 request.go:632] Waited for 195.387799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:07.069975  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:07.069983  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.069995  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.070015  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.073101  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.073736  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:07.073758  627293 pod_ready.go:82] duration metric: took 399.744041ms for pod "kube-controller-manager-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.073774  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.269459  627293 request.go:632] Waited for 195.589987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m02
	I1209 10:51:07.269554  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m02
	I1209 10:51:07.269566  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.269577  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.269584  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.273156  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.470290  627293 request.go:632] Waited for 196.338376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:07.470357  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:07.470364  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.470374  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.470384  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.474385  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.474970  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:07.474989  627293 pod_ready.go:82] duration metric: took 401.206827ms for pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.475001  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dckpl" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.670046  627293 request.go:632] Waited for 194.938435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dckpl
	I1209 10:51:07.670123  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dckpl
	I1209 10:51:07.670153  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.670161  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.670177  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.673612  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.869971  627293 request.go:632] Waited for 195.374837ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:07.870066  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:07.870077  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:07.870089  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:07.870096  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:07.873498  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:07.873966  627293 pod_ready.go:93] pod "kube-proxy-dckpl" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:07.873986  627293 pod_ready.go:82] duration metric: took 398.974048ms for pod "kube-proxy-dckpl" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:07.873999  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wrvgb" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.070122  627293 request.go:632] Waited for 195.97145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgb
	I1209 10:51:08.070208  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgb
	I1209 10:51:08.070220  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.070232  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.070246  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.073337  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:08.270335  627293 request.go:632] Waited for 196.383902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:08.270428  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:08.270439  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.270446  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.270450  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.273875  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:08.274422  627293 pod_ready.go:93] pod "kube-proxy-wrvgb" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:08.274444  627293 pod_ready.go:82] duration metric: took 400.436343ms for pod "kube-proxy-wrvgb" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.274455  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.469480  627293 request.go:632] Waited for 194.92406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382
	I1209 10:51:08.469571  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382
	I1209 10:51:08.469579  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.469593  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.469604  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.473101  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:08.670247  627293 request.go:632] Waited for 196.404632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:08.670318  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:51:08.670323  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.670331  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.670334  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.673487  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:08.674226  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:08.674250  627293 pod_ready.go:82] duration metric: took 399.789273ms for pod "kube-scheduler-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.674263  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:08.870290  627293 request.go:632] Waited for 195.926045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m02
	I1209 10:51:08.870371  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m02
	I1209 10:51:08.870379  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:08.870387  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:08.870393  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:08.873809  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:09.069870  627293 request.go:632] Waited for 195.368943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:09.069944  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:51:09.069950  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.069962  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.069967  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.074483  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:09.075070  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:51:09.075095  627293 pod_ready.go:82] duration metric: took 400.825701ms for pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:51:09.075107  627293 pod_ready.go:39] duration metric: took 3.199339967s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
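
The pod_ready.go waits above poll each control-plane pod until its Ready condition reports True, with client-side throttling spacing out the GETs. A minimal client-go sketch of the same readiness check, assuming a reachable kubeconfig at a hypothetical path (an illustration only, not minikube's pod_ready.go implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True or the timeout expires.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second) // fixed interval; the real client also applies client-side throttling
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }

    func main() {
    	// Hypothetical kubeconfig path; adjust for your cluster.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-dckpl", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("pod is Ready")
    }
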
	I1209 10:51:09.075137  627293 api_server.go:52] waiting for apiserver process to appear ...
	I1209 10:51:09.075203  627293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 10:51:09.089759  627293 api_server.go:72] duration metric: took 21.007136874s to wait for apiserver process to appear ...
	I1209 10:51:09.089785  627293 api_server.go:88] waiting for apiserver healthz status ...
	I1209 10:51:09.089806  627293 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I1209 10:51:09.093868  627293 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I1209 10:51:09.093935  627293 round_trippers.go:463] GET https://192.168.39.69:8443/version
	I1209 10:51:09.093940  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.093949  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.093957  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.094830  627293 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1209 10:51:09.094916  627293 api_server.go:141] control plane version: v1.31.2
	I1209 10:51:09.094932  627293 api_server.go:131] duration metric: took 5.141357ms to wait for apiserver health ...
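
The apiserver health gate above is an HTTPS GET against /healthz that must come back 200 with body "ok", followed by a /version request to read the control plane version. A rough equivalent of the healthz probe, assuming anonymous access is allowed and skipping certificate verification for brevity (the real check authenticates with the cluster's client certificates):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Skip TLS verification only for illustration; production code should trust the cluster CA.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.39.69:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
    }
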
	I1209 10:51:09.094940  627293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 10:51:09.269312  627293 request.go:632] Waited for 174.277582ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:09.269388  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:09.269394  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.269402  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.269407  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.274316  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:09.278484  627293 system_pods.go:59] 17 kube-system pods found
	I1209 10:51:09.278512  627293 system_pods.go:61] "coredns-7c65d6cfc9-8hlml" [d820cd6c-5064-4934-adc8-c68f84c09b46] Running
	I1209 10:51:09.278518  627293 system_pods.go:61] "coredns-7c65d6cfc9-rz6mw" [af297b6d-91f1-4114-b98c-cdfdfbd1589e] Running
	I1209 10:51:09.278523  627293 system_pods.go:61] "etcd-ha-792382" [eee2fbad-7f2f-4f5a-b701-abdbf8456f99] Running
	I1209 10:51:09.278527  627293 system_pods.go:61] "etcd-ha-792382-m02" [424768ff-af5d-4a31-b062-ab2c9576884d] Running
	I1209 10:51:09.278531  627293 system_pods.go:61] "kindnet-bqp2z" [b2c40579-4d72-4efe-b921-1e0f98b91544] Running
	I1209 10:51:09.278534  627293 system_pods.go:61] "kindnet-hkrhk" [9b35011c-ab45-4f55-a60f-08f9c4509c1d] Running
	I1209 10:51:09.278540  627293 system_pods.go:61] "kube-apiserver-ha-792382" [5157cfb0-bc91-4efe-b2e2-689778d5b012] Running
	I1209 10:51:09.278544  627293 system_pods.go:61] "kube-apiserver-ha-792382-m02" [71f9ce1c-aeff-4853-83ec-01a7fb6a81d5] Running
	I1209 10:51:09.278547  627293 system_pods.go:61] "kube-controller-manager-ha-792382" [f8cb7175-f6fd-4dcf-a6a5-66c44aa136ce] Running
	I1209 10:51:09.278550  627293 system_pods.go:61] "kube-controller-manager-ha-792382-m02" [2f7d8deb-ddad-47ac-ad18-5f8e7d0e095d] Running
	I1209 10:51:09.278553  627293 system_pods.go:61] "kube-proxy-dckpl" [13f7bda1-f9c2-4fd0-96e0-b6aee1139bc1] Running
	I1209 10:51:09.278556  627293 system_pods.go:61] "kube-proxy-wrvgb" [2531e29f-a4d5-41f9-8c38-3220b4caf96b] Running
	I1209 10:51:09.278560  627293 system_pods.go:61] "kube-scheduler-ha-792382" [b11693d0-b2e7-45a1-a1a2-6519a1535c45] Running
	I1209 10:51:09.278566  627293 system_pods.go:61] "kube-scheduler-ha-792382-m02" [250ce40c-5cbf-4f5d-a475-4f3d7ec100d9] Running
	I1209 10:51:09.278569  627293 system_pods.go:61] "kube-vip-ha-792382" [511f3a84-b444-4603-8f6f-eeeb262b1384] Running
	I1209 10:51:09.278574  627293 system_pods.go:61] "kube-vip-ha-792382-m02" [f2110c28-09f5-42f9-8e16-beec9759a0a2] Running
	I1209 10:51:09.278578  627293 system_pods.go:61] "storage-provisioner" [4419fe4f-e2ed-4ecb-a912-2dd074e29727] Running
	I1209 10:51:09.278587  627293 system_pods.go:74] duration metric: took 183.639674ms to wait for pod list to return data ...
	I1209 10:51:09.278598  627293 default_sa.go:34] waiting for default service account to be created ...
	I1209 10:51:09.470106  627293 request.go:632] Waited for 191.4045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I1209 10:51:09.470215  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I1209 10:51:09.470227  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.470242  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.470252  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.479626  627293 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1209 10:51:09.479907  627293 default_sa.go:45] found service account: "default"
	I1209 10:51:09.479929  627293 default_sa.go:55] duration metric: took 201.319758ms for default service account to be created ...
	I1209 10:51:09.479942  627293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 10:51:09.670105  627293 request.go:632] Waited for 190.065824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:09.670208  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:51:09.670215  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.670223  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.670228  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.674641  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:51:09.679080  627293 system_pods.go:86] 17 kube-system pods found
	I1209 10:51:09.679114  627293 system_pods.go:89] "coredns-7c65d6cfc9-8hlml" [d820cd6c-5064-4934-adc8-c68f84c09b46] Running
	I1209 10:51:09.679123  627293 system_pods.go:89] "coredns-7c65d6cfc9-rz6mw" [af297b6d-91f1-4114-b98c-cdfdfbd1589e] Running
	I1209 10:51:09.679131  627293 system_pods.go:89] "etcd-ha-792382" [eee2fbad-7f2f-4f5a-b701-abdbf8456f99] Running
	I1209 10:51:09.679138  627293 system_pods.go:89] "etcd-ha-792382-m02" [424768ff-af5d-4a31-b062-ab2c9576884d] Running
	I1209 10:51:09.679143  627293 system_pods.go:89] "kindnet-bqp2z" [b2c40579-4d72-4efe-b921-1e0f98b91544] Running
	I1209 10:51:09.679149  627293 system_pods.go:89] "kindnet-hkrhk" [9b35011c-ab45-4f55-a60f-08f9c4509c1d] Running
	I1209 10:51:09.679156  627293 system_pods.go:89] "kube-apiserver-ha-792382" [5157cfb0-bc91-4efe-b2e2-689778d5b012] Running
	I1209 10:51:09.679165  627293 system_pods.go:89] "kube-apiserver-ha-792382-m02" [71f9ce1c-aeff-4853-83ec-01a7fb6a81d5] Running
	I1209 10:51:09.679171  627293 system_pods.go:89] "kube-controller-manager-ha-792382" [f8cb7175-f6fd-4dcf-a6a5-66c44aa136ce] Running
	I1209 10:51:09.679180  627293 system_pods.go:89] "kube-controller-manager-ha-792382-m02" [2f7d8deb-ddad-47ac-ad18-5f8e7d0e095d] Running
	I1209 10:51:09.679184  627293 system_pods.go:89] "kube-proxy-dckpl" [13f7bda1-f9c2-4fd0-96e0-b6aee1139bc1] Running
	I1209 10:51:09.679188  627293 system_pods.go:89] "kube-proxy-wrvgb" [2531e29f-a4d5-41f9-8c38-3220b4caf96b] Running
	I1209 10:51:09.679195  627293 system_pods.go:89] "kube-scheduler-ha-792382" [b11693d0-b2e7-45a1-a1a2-6519a1535c45] Running
	I1209 10:51:09.679198  627293 system_pods.go:89] "kube-scheduler-ha-792382-m02" [250ce40c-5cbf-4f5d-a475-4f3d7ec100d9] Running
	I1209 10:51:09.679204  627293 system_pods.go:89] "kube-vip-ha-792382" [511f3a84-b444-4603-8f6f-eeeb262b1384] Running
	I1209 10:51:09.679208  627293 system_pods.go:89] "kube-vip-ha-792382-m02" [f2110c28-09f5-42f9-8e16-beec9759a0a2] Running
	I1209 10:51:09.679214  627293 system_pods.go:89] "storage-provisioner" [4419fe4f-e2ed-4ecb-a912-2dd074e29727] Running
	I1209 10:51:09.679221  627293 system_pods.go:126] duration metric: took 199.268781ms to wait for k8s-apps to be running ...
	I1209 10:51:09.679230  627293 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 10:51:09.679276  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:51:09.694076  627293 system_svc.go:56] duration metric: took 14.835467ms WaitForService to wait for kubelet
	I1209 10:51:09.694109  627293 kubeadm.go:582] duration metric: took 21.611489035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
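
The kubelet gate a few lines up shells out to systemctl over SSH and treats the exit status as the whole answer. A local sketch of the same probe using os/exec (hedged: the real run wraps this in the node's ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// `systemctl is-active --quiet kubelet` exits 0 when the unit is active.
    	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    		fmt.Println("kubelet is not running:", err)
    		return
    	}
    	fmt.Println("kubelet is running")
    }
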
	I1209 10:51:09.694134  627293 node_conditions.go:102] verifying NodePressure condition ...
	I1209 10:51:09.869608  627293 request.go:632] Waited for 175.356595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes
	I1209 10:51:09.869706  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes
	I1209 10:51:09.869714  627293 round_trippers.go:469] Request Headers:
	I1209 10:51:09.869723  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:51:09.869734  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:51:09.873420  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:51:09.874254  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:51:09.874278  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:51:09.874300  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:51:09.874304  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:51:09.874310  627293 node_conditions.go:105] duration metric: took 180.168766ms to run NodePressure ...
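
The NodePressure step above only reads each node's reported capacity (ephemeral storage and CPU) back from the API. A client-go sketch of that read, reusing the hypothetical kubeconfig path from the earlier example:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    }
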
	I1209 10:51:09.874324  627293 start.go:241] waiting for startup goroutines ...
	I1209 10:51:09.874349  627293 start.go:255] writing updated cluster config ...
	I1209 10:51:09.876293  627293 out.go:201] 
	I1209 10:51:09.877844  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:51:09.877938  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:51:09.879618  627293 out.go:177] * Starting "ha-792382-m03" control-plane node in "ha-792382" cluster
	I1209 10:51:09.880651  627293 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:51:09.880677  627293 cache.go:56] Caching tarball of preloaded images
	I1209 10:51:09.880794  627293 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 10:51:09.880808  627293 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 10:51:09.880894  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:51:09.881065  627293 start.go:360] acquireMachinesLock for ha-792382-m03: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 10:51:09.881109  627293 start.go:364] duration metric: took 24.695µs to acquireMachinesLock for "ha-792382-m03"
	I1209 10:51:09.881155  627293 start.go:93] Provisioning new machine with config: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:51:09.881251  627293 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1209 10:51:09.882597  627293 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 10:51:09.882697  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:51:09.882736  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:51:09.898133  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41609
	I1209 10:51:09.898752  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:51:09.899364  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:51:09.899388  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:51:09.899714  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:51:09.899932  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetMachineName
	I1209 10:51:09.900153  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:09.900311  627293 start.go:159] libmachine.API.Create for "ha-792382" (driver="kvm2")
	I1209 10:51:09.900340  627293 client.go:168] LocalClient.Create starting
	I1209 10:51:09.900368  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 10:51:09.900399  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:51:09.900414  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:51:09.900469  627293 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 10:51:09.900490  627293 main.go:141] libmachine: Decoding PEM data...
	I1209 10:51:09.900500  627293 main.go:141] libmachine: Parsing certificate...
	I1209 10:51:09.900517  627293 main.go:141] libmachine: Running pre-create checks...
	I1209 10:51:09.900526  627293 main.go:141] libmachine: (ha-792382-m03) Calling .PreCreateCheck
	I1209 10:51:09.900676  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetConfigRaw
	I1209 10:51:09.901024  627293 main.go:141] libmachine: Creating machine...
	I1209 10:51:09.901037  627293 main.go:141] libmachine: (ha-792382-m03) Calling .Create
	I1209 10:51:09.901229  627293 main.go:141] libmachine: (ha-792382-m03) Creating KVM machine...
	I1209 10:51:09.902418  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found existing default KVM network
	I1209 10:51:09.902584  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found existing private KVM network mk-ha-792382
	I1209 10:51:09.902745  627293 main.go:141] libmachine: (ha-792382-m03) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03 ...
	I1209 10:51:09.902768  627293 main.go:141] libmachine: (ha-792382-m03) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 10:51:09.902867  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:09.902742  628056 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:51:09.902959  627293 main.go:141] libmachine: (ha-792382-m03) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 10:51:10.187575  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:10.187437  628056 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa...
	I1209 10:51:10.500975  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:10.500841  628056 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/ha-792382-m03.rawdisk...
	I1209 10:51:10.501016  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Writing magic tar header
	I1209 10:51:10.501026  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Writing SSH key tar header
	I1209 10:51:10.501034  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:10.500985  628056 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03 ...
	I1209 10:51:10.501188  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03
	I1209 10:51:10.501214  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03 (perms=drwx------)
	I1209 10:51:10.501235  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 10:51:10.501255  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:51:10.501270  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 10:51:10.501289  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 10:51:10.501315  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 10:51:10.501328  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home/jenkins
	I1209 10:51:10.501340  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Checking permissions on dir: /home
	I1209 10:51:10.501354  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Skipping /home - not owner
	I1209 10:51:10.501371  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 10:51:10.501393  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 10:51:10.501413  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 10:51:10.501426  627293 main.go:141] libmachine: (ha-792382-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 10:51:10.501440  627293 main.go:141] libmachine: (ha-792382-m03) Creating domain...
	I1209 10:51:10.502439  627293 main.go:141] libmachine: (ha-792382-m03) define libvirt domain using xml: 
	I1209 10:51:10.502466  627293 main.go:141] libmachine: (ha-792382-m03) <domain type='kvm'>
	I1209 10:51:10.502476  627293 main.go:141] libmachine: (ha-792382-m03)   <name>ha-792382-m03</name>
	I1209 10:51:10.502484  627293 main.go:141] libmachine: (ha-792382-m03)   <memory unit='MiB'>2200</memory>
	I1209 10:51:10.502490  627293 main.go:141] libmachine: (ha-792382-m03)   <vcpu>2</vcpu>
	I1209 10:51:10.502495  627293 main.go:141] libmachine: (ha-792382-m03)   <features>
	I1209 10:51:10.502506  627293 main.go:141] libmachine: (ha-792382-m03)     <acpi/>
	I1209 10:51:10.502516  627293 main.go:141] libmachine: (ha-792382-m03)     <apic/>
	I1209 10:51:10.502524  627293 main.go:141] libmachine: (ha-792382-m03)     <pae/>
	I1209 10:51:10.502534  627293 main.go:141] libmachine: (ha-792382-m03)     
	I1209 10:51:10.502544  627293 main.go:141] libmachine: (ha-792382-m03)   </features>
	I1209 10:51:10.502552  627293 main.go:141] libmachine: (ha-792382-m03)   <cpu mode='host-passthrough'>
	I1209 10:51:10.502587  627293 main.go:141] libmachine: (ha-792382-m03)   
	I1209 10:51:10.502612  627293 main.go:141] libmachine: (ha-792382-m03)   </cpu>
	I1209 10:51:10.502650  627293 main.go:141] libmachine: (ha-792382-m03)   <os>
	I1209 10:51:10.502668  627293 main.go:141] libmachine: (ha-792382-m03)     <type>hvm</type>
	I1209 10:51:10.502674  627293 main.go:141] libmachine: (ha-792382-m03)     <boot dev='cdrom'/>
	I1209 10:51:10.502679  627293 main.go:141] libmachine: (ha-792382-m03)     <boot dev='hd'/>
	I1209 10:51:10.502688  627293 main.go:141] libmachine: (ha-792382-m03)     <bootmenu enable='no'/>
	I1209 10:51:10.502693  627293 main.go:141] libmachine: (ha-792382-m03)   </os>
	I1209 10:51:10.502731  627293 main.go:141] libmachine: (ha-792382-m03)   <devices>
	I1209 10:51:10.502756  627293 main.go:141] libmachine: (ha-792382-m03)     <disk type='file' device='cdrom'>
	I1209 10:51:10.502773  627293 main.go:141] libmachine: (ha-792382-m03)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/boot2docker.iso'/>
	I1209 10:51:10.502784  627293 main.go:141] libmachine: (ha-792382-m03)       <target dev='hdc' bus='scsi'/>
	I1209 10:51:10.502796  627293 main.go:141] libmachine: (ha-792382-m03)       <readonly/>
	I1209 10:51:10.502806  627293 main.go:141] libmachine: (ha-792382-m03)     </disk>
	I1209 10:51:10.502815  627293 main.go:141] libmachine: (ha-792382-m03)     <disk type='file' device='disk'>
	I1209 10:51:10.502827  627293 main.go:141] libmachine: (ha-792382-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 10:51:10.502844  627293 main.go:141] libmachine: (ha-792382-m03)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/ha-792382-m03.rawdisk'/>
	I1209 10:51:10.502854  627293 main.go:141] libmachine: (ha-792382-m03)       <target dev='hda' bus='virtio'/>
	I1209 10:51:10.502862  627293 main.go:141] libmachine: (ha-792382-m03)     </disk>
	I1209 10:51:10.502873  627293 main.go:141] libmachine: (ha-792382-m03)     <interface type='network'>
	I1209 10:51:10.502886  627293 main.go:141] libmachine: (ha-792382-m03)       <source network='mk-ha-792382'/>
	I1209 10:51:10.502901  627293 main.go:141] libmachine: (ha-792382-m03)       <model type='virtio'/>
	I1209 10:51:10.502917  627293 main.go:141] libmachine: (ha-792382-m03)     </interface>
	I1209 10:51:10.502927  627293 main.go:141] libmachine: (ha-792382-m03)     <interface type='network'>
	I1209 10:51:10.502935  627293 main.go:141] libmachine: (ha-792382-m03)       <source network='default'/>
	I1209 10:51:10.502945  627293 main.go:141] libmachine: (ha-792382-m03)       <model type='virtio'/>
	I1209 10:51:10.502954  627293 main.go:141] libmachine: (ha-792382-m03)     </interface>
	I1209 10:51:10.502965  627293 main.go:141] libmachine: (ha-792382-m03)     <serial type='pty'>
	I1209 10:51:10.502981  627293 main.go:141] libmachine: (ha-792382-m03)       <target port='0'/>
	I1209 10:51:10.503011  627293 main.go:141] libmachine: (ha-792382-m03)     </serial>
	I1209 10:51:10.503041  627293 main.go:141] libmachine: (ha-792382-m03)     <console type='pty'>
	I1209 10:51:10.503058  627293 main.go:141] libmachine: (ha-792382-m03)       <target type='serial' port='0'/>
	I1209 10:51:10.503071  627293 main.go:141] libmachine: (ha-792382-m03)     </console>
	I1209 10:51:10.503082  627293 main.go:141] libmachine: (ha-792382-m03)     <rng model='virtio'>
	I1209 10:51:10.503096  627293 main.go:141] libmachine: (ha-792382-m03)       <backend model='random'>/dev/random</backend>
	I1209 10:51:10.503113  627293 main.go:141] libmachine: (ha-792382-m03)     </rng>
	I1209 10:51:10.503127  627293 main.go:141] libmachine: (ha-792382-m03)     
	I1209 10:51:10.503136  627293 main.go:141] libmachine: (ha-792382-m03)     
	I1209 10:51:10.503142  627293 main.go:141] libmachine: (ha-792382-m03)   </devices>
	I1209 10:51:10.503150  627293 main.go:141] libmachine: (ha-792382-m03) </domain>
	I1209 10:51:10.503164  627293 main.go:141] libmachine: (ha-792382-m03) 
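
The block above is the complete libvirt domain XML the kvm2 driver generates before defining and booting the guest. A minimal sketch of the define-and-start step using the libvirt Go bindings (the import path varies between bindings releases; shown here for illustration only, not the driver's exact code):

    package main

    import (
    	"fmt"
    	"os"

    	libvirt "libvirt.org/go/libvirt" // older releases use github.com/libvirt/libvirt-go
    )

    func main() {
    	xml, err := os.ReadFile("ha-792382-m03.xml") // the domain XML printed in the log above
    	if err != nil {
    		panic(err)
    	}
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	dom, err := conn.DomainDefineXML(string(xml)) // persist the domain definition
    	if err != nil {
    		panic(err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil { // boot the defined domain
    		panic(err)
    	}
    	fmt.Println("domain defined and started")
    }
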
	I1209 10:51:10.509799  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:26:51:82 in network default
	I1209 10:51:10.510544  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:10.510571  627293 main.go:141] libmachine: (ha-792382-m03) Ensuring networks are active...
	I1209 10:51:10.511459  627293 main.go:141] libmachine: (ha-792382-m03) Ensuring network default is active
	I1209 10:51:10.511785  627293 main.go:141] libmachine: (ha-792382-m03) Ensuring network mk-ha-792382 is active
	I1209 10:51:10.512166  627293 main.go:141] libmachine: (ha-792382-m03) Getting domain xml...
	I1209 10:51:10.512954  627293 main.go:141] libmachine: (ha-792382-m03) Creating domain...
	I1209 10:51:11.772243  627293 main.go:141] libmachine: (ha-792382-m03) Waiting to get IP...
	I1209 10:51:11.773341  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:11.773804  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:11.773837  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:11.773768  628056 retry.go:31] will retry after 261.519944ms: waiting for machine to come up
	I1209 10:51:12.038077  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:12.038774  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:12.038804  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:12.038709  628056 retry.go:31] will retry after 310.562513ms: waiting for machine to come up
	I1209 10:51:12.350405  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:12.350812  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:12.350870  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:12.350779  628056 retry.go:31] will retry after 381.875413ms: waiting for machine to come up
	I1209 10:51:12.734428  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:12.734917  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:12.734939  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:12.734868  628056 retry.go:31] will retry after 376.611685ms: waiting for machine to come up
	I1209 10:51:13.113430  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:13.113850  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:13.113878  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:13.113807  628056 retry.go:31] will retry after 480.736793ms: waiting for machine to come up
	I1209 10:51:13.596329  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:13.596796  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:13.596819  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:13.596753  628056 retry.go:31] will retry after 875.034768ms: waiting for machine to come up
	I1209 10:51:14.473751  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:14.474126  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:14.474155  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:14.474088  628056 retry.go:31] will retry after 816.368717ms: waiting for machine to come up
	I1209 10:51:15.292960  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:15.293587  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:15.293618  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:15.293489  628056 retry.go:31] will retry after 1.183655157s: waiting for machine to come up
	I1209 10:51:16.478955  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:16.479455  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:16.479486  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:16.479390  628056 retry.go:31] will retry after 1.459421983s: waiting for machine to come up
	I1209 10:51:17.940565  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:17.940909  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:17.940939  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:17.940853  628056 retry.go:31] will retry after 2.01883018s: waiting for machine to come up
	I1209 10:51:19.961861  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:19.962417  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:19.962457  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:19.962353  628056 retry.go:31] will retry after 1.857861431s: waiting for machine to come up
	I1209 10:51:21.822060  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:21.822610  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:21.822640  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:21.822556  628056 retry.go:31] will retry after 2.674364218s: waiting for machine to come up
	I1209 10:51:24.499290  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:24.499696  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:24.499718  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:24.499647  628056 retry.go:31] will retry after 3.815833745s: waiting for machine to come up
	I1209 10:51:28.319279  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:28.319654  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find current IP address of domain ha-792382-m03 in network mk-ha-792382
	I1209 10:51:28.319685  627293 main.go:141] libmachine: (ha-792382-m03) DBG | I1209 10:51:28.319601  628056 retry.go:31] will retry after 5.165694329s: waiting for machine to come up
	I1209 10:51:33.487484  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.487908  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has current primary IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.487939  627293 main.go:141] libmachine: (ha-792382-m03) Found IP for machine: 192.168.39.82
	I1209 10:51:33.487954  627293 main.go:141] libmachine: (ha-792382-m03) Reserving static IP address...
	I1209 10:51:33.488381  627293 main.go:141] libmachine: (ha-792382-m03) DBG | unable to find host DHCP lease matching {name: "ha-792382-m03", mac: "52:54:00:10:ae:3c", ip: "192.168.39.82"} in network mk-ha-792382
	I1209 10:51:33.564150  627293 main.go:141] libmachine: (ha-792382-m03) Reserved static IP address: 192.168.39.82
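
The retry.go lines above are a simple wait-for-IP loop: each attempt looks for the machine's DHCP lease and, on failure, sleeps for a randomized, slowly growing delay before trying again. A generic sketch of that pattern (the probe function is a stand-in; the real code queries the DHCP leases of the mk-ha-792382 network for the domain's MAC address):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithJitter calls probe until it succeeds or maxWait elapses, sleeping a
    // randomized, growing delay between attempts, as the retry.go lines above do.
    func retryWithJitter(maxWait time.Duration, probe func() (string, error)) (string, error) {
    	start := time.Now()
    	base := 250 * time.Millisecond
    	for attempt := 1; time.Since(start) < maxWait; attempt++ {
    		ip, err := probe()
    		if err == nil {
    			return ip, nil
    		}
    		delay := base*time.Duration(attempt) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("attempt %d failed (%v), retrying after %v\n", attempt, err, delay)
    		time.Sleep(delay)
    	}
    	return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
    	// Stand-in probe: pretend the lease only shows up on the third attempt.
    	calls := 0
    	ip, err := retryWithJitter(2*time.Minute, func() (string, error) {
    		calls++
    		if calls < 3 {
    			return "", errors.New("no lease yet")
    		}
    		return "192.168.39.82", nil
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("got IP:", ip)
    }
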
	I1209 10:51:33.564197  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Getting to WaitForSSH function...
	I1209 10:51:33.564206  627293 main.go:141] libmachine: (ha-792382-m03) Waiting for SSH to be available...
	I1209 10:51:33.567024  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.567471  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:minikube Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:33.567501  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.567664  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Using SSH client type: external
	I1209 10:51:33.567687  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa (-rw-------)
	I1209 10:51:33.567722  627293 main.go:141] libmachine: (ha-792382-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 10:51:33.567734  627293 main.go:141] libmachine: (ha-792382-m03) DBG | About to run SSH command:
	I1209 10:51:33.567748  627293 main.go:141] libmachine: (ha-792382-m03) DBG | exit 0
	I1209 10:51:33.698092  627293 main.go:141] libmachine: (ha-792382-m03) DBG | SSH cmd err, output: <nil>: 
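
WaitForSSH above shells out to the system ssh client with the options printed in the log and treats a successful `exit 0` as proof the guest is reachable. A trimmed sketch of that single probe (key path and address copied from the log; the retry loop around it is omitted):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirrors the external ssh invocation in the log: no known_hosts, key-only auth,
    	// and a plain `exit 0` as the reachability test.
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "IdentitiesOnly=yes",
    		"-i", "/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa",
    		"-p", "22",
    		"docker@192.168.39.82",
    		"exit 0")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("ssh not ready yet: %v (%s)\n", err, out)
    		return
    	}
    	fmt.Println("ssh is available")
    }
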
	I1209 10:51:33.698421  627293 main.go:141] libmachine: (ha-792382-m03) KVM machine creation complete!
	I1209 10:51:33.698819  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetConfigRaw
	I1209 10:51:33.699478  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:33.699674  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:33.699826  627293 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 10:51:33.699837  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetState
	I1209 10:51:33.701167  627293 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 10:51:33.701183  627293 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 10:51:33.701191  627293 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 10:51:33.701198  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:33.703744  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.704133  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:33.704162  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.704266  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:33.704462  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.704600  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.704723  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:33.704916  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:33.705157  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:33.705168  627293 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 10:51:33.813390  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:51:33.813423  627293 main.go:141] libmachine: Detecting the provisioner...
	I1209 10:51:33.813436  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:33.816441  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.816804  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:33.816841  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.816951  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:33.817167  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.817376  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.817559  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:33.817716  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:33.817907  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:33.817921  627293 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 10:51:33.926605  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 10:51:33.926676  627293 main.go:141] libmachine: found compatible host: buildroot
	I1209 10:51:33.926683  627293 main.go:141] libmachine: Provisioning with buildroot...
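
Provisioner detection is just `cat /etc/os-release` plus a lookup of the ID/NAME fields, which on the minikube guest resolve to Buildroot. A small sketch of parsing that key=value format:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/etc/os-release")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	info := map[string]string{}
    	s := bufio.NewScanner(f)
    	for s.Scan() {
    		line := strings.TrimSpace(s.Text())
    		if line == "" || !strings.Contains(line, "=") {
    			continue
    		}
    		kv := strings.SplitN(line, "=", 2)
    		info[kv[0]] = strings.Trim(kv[1], `"`) // values may be quoted, e.g. PRETTY_NAME
    	}
    	if err := s.Err(); err != nil {
    		panic(err)
    	}
    	// On the guest shown in the log this prints: buildroot / Buildroot 2023.02.9
    	fmt.Println(info["ID"], "/", info["PRETTY_NAME"])
    }
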
	I1209 10:51:33.926691  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetMachineName
	I1209 10:51:33.926942  627293 buildroot.go:166] provisioning hostname "ha-792382-m03"
	I1209 10:51:33.926972  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetMachineName
	I1209 10:51:33.927120  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:33.929899  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.930353  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:33.930382  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:33.930545  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:33.930780  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.930935  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:33.931076  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:33.931236  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:33.931442  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:33.931455  627293 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792382-m03 && echo "ha-792382-m03" | sudo tee /etc/hostname
	I1209 10:51:34.053804  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792382-m03
	
	I1209 10:51:34.053838  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.056450  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.056795  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.056821  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.057070  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.057253  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.057460  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.057580  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.057749  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:34.057912  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:34.057932  627293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792382-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792382-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792382-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 10:51:34.174396  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:51:34.174436  627293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 10:51:34.174459  627293 buildroot.go:174] setting up certificates
	I1209 10:51:34.174471  627293 provision.go:84] configureAuth start
	I1209 10:51:34.174484  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetMachineName
	I1209 10:51:34.174826  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetIP
	I1209 10:51:34.178006  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.178384  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.178414  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.178593  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.180882  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.181259  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.181297  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.181434  627293 provision.go:143] copyHostCerts
	I1209 10:51:34.181467  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:51:34.181509  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 10:51:34.181521  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:51:34.181599  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 10:51:34.181708  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:51:34.181739  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 10:51:34.181750  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:51:34.181796  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 10:51:34.181862  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:51:34.181879  627293 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 10:51:34.181885  627293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:51:34.181910  627293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 10:51:34.181961  627293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.ha-792382-m03 san=[127.0.0.1 192.168.39.82 ha-792382-m03 localhost minikube]
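
The server certificate above is a CA-signed cert whose subject alternative names cover the node IPs and hostnames listed in the san=[...] field. A condensed crypto/x509 sketch of issuing such a cert (a throwaway CA is generated so the example runs end to end; this is an illustration, not minikube's provisioning code, which loads its CA from .minikube/certs):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // signServerCert issues a server certificate with the SANs from the log, signed by caCert/caKey.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-792382-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the log line above.
    		DNSNames:    []string{"ha-792382-m03", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.82")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil // DER-encoded; PEM-encode before writing something like server.pem
    }

    func main() {
    	// Throwaway self-signed CA, only so the sketch is runnable.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    		IsCA:                  true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	caCert, err := x509.ParseCertificate(caDER)
    	if err != nil {
    		panic(err)
    	}
    	der, _, err := signServerCert(caCert, caKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("issued server cert:", len(der), "bytes of DER")
    }
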
	I1209 10:51:34.410867  627293 provision.go:177] copyRemoteCerts
	I1209 10:51:34.410930  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 10:51:34.410961  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.414202  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.414663  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.414696  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.414964  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.415202  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.415374  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.415561  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa Username:docker}
	I1209 10:51:34.500121  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 10:51:34.500216  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 10:51:34.525465  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 10:51:34.525566  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 10:51:34.548733  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 10:51:34.548819  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 10:51:34.570848  627293 provision.go:87] duration metric: took 396.361471ms to configureAuth
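[Annotation] The configureAuth step above generates a per-machine server certificate signed by the shared minikube CA, with the node's hostname and IP addresses as SANs (the provision.go:117 line), and then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. As a rough, hedged illustration of the SAN handling only, here is a minimal Go sketch using crypto/x509; it is self-signed for brevity (minikube signs with ca.pem/ca-key.pem instead), and the names and addresses are copied from the log line, not from minikube's actual code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs mirroring the provision.go:117 line above.
	dnsNames := []string{"ha-792382-m03", "localhost", "minikube"}
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.82")}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-792382-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}
	// Self-signed here for brevity; the real provisioner signs with the CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}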
	I1209 10:51:34.570884  627293 buildroot.go:189] setting minikube options for container-runtime
	I1209 10:51:34.571164  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:51:34.571276  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.574107  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.574532  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.574557  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.574761  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.574957  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.575114  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.575329  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.575548  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:34.575797  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:34.575824  627293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 10:51:34.816625  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 10:51:34.816655  627293 main.go:141] libmachine: Checking connection to Docker...
	I1209 10:51:34.816670  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetURL
	I1209 10:51:34.817924  627293 main.go:141] libmachine: (ha-792382-m03) DBG | Using libvirt version 6000000
	I1209 10:51:34.820293  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.820739  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.820782  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.820943  627293 main.go:141] libmachine: Docker is up and running!
	I1209 10:51:34.820954  627293 main.go:141] libmachine: Reticulating splines...
	I1209 10:51:34.820962  627293 client.go:171] duration metric: took 24.920612799s to LocalClient.Create
	I1209 10:51:34.820990  627293 start.go:167] duration metric: took 24.920677638s to libmachine.API.Create "ha-792382"
	I1209 10:51:34.821001  627293 start.go:293] postStartSetup for "ha-792382-m03" (driver="kvm2")
	I1209 10:51:34.821015  627293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 10:51:34.821041  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:34.821314  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 10:51:34.821340  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.823716  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.824123  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.824150  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.824346  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.824540  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.824710  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.824899  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa Username:docker}
	I1209 10:51:34.908596  627293 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 10:51:34.912587  627293 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 10:51:34.912634  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 10:51:34.912758  627293 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 10:51:34.912881  627293 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 10:51:34.912894  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /etc/ssl/certs/6170172.pem
	I1209 10:51:34.913014  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 10:51:34.921828  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:51:34.944676  627293 start.go:296] duration metric: took 123.657477ms for postStartSetup
	I1209 10:51:34.944735  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetConfigRaw
	I1209 10:51:34.945372  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetIP
	I1209 10:51:34.948020  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.948350  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.948374  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.948706  627293 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:51:34.948901  627293 start.go:128] duration metric: took 25.067639086s to createHost
	I1209 10:51:34.948928  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:34.951092  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.951471  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:34.951504  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:34.951672  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:34.951858  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.952015  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:34.952130  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:34.952269  627293 main.go:141] libmachine: Using SSH client type: native
	I1209 10:51:34.952475  627293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1209 10:51:34.952491  627293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 10:51:35.062736  627293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733741495.040495881
	
	I1209 10:51:35.062764  627293 fix.go:216] guest clock: 1733741495.040495881
	I1209 10:51:35.062773  627293 fix.go:229] Guest: 2024-12-09 10:51:35.040495881 +0000 UTC Remote: 2024-12-09 10:51:34.948914535 +0000 UTC m=+142.833153468 (delta=91.581346ms)
	I1209 10:51:35.062795  627293 fix.go:200] guest clock delta is within tolerance: 91.581346ms
	I1209 10:51:35.062802  627293 start.go:83] releasing machines lock for "ha-792382-m03", held for 25.181683585s
	I1209 10:51:35.062825  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:35.063125  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetIP
	I1209 10:51:35.065564  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.065919  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:35.065950  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.068041  627293 out.go:177] * Found network options:
	I1209 10:51:35.069311  627293 out.go:177]   - NO_PROXY=192.168.39.69,192.168.39.89
	W1209 10:51:35.070337  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	W1209 10:51:35.070367  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 10:51:35.070382  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:35.070888  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:35.071098  627293 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:51:35.071216  627293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 10:51:35.071260  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	W1209 10:51:35.071333  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	W1209 10:51:35.071358  627293 proxy.go:119] fail to check proxy env: Error ip not in block
	I1209 10:51:35.071448  627293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 10:51:35.071472  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:51:35.074136  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.074287  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.074566  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:35.074588  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.074614  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:35.074633  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:35.074729  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:35.074920  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:51:35.074923  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:35.075091  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:51:35.075094  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:35.075270  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:51:35.075298  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa Username:docker}
	I1209 10:51:35.075413  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa Username:docker}
	I1209 10:51:35.318511  627293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 10:51:35.324511  627293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 10:51:35.324586  627293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 10:51:35.341575  627293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 10:51:35.341607  627293 start.go:495] detecting cgroup driver to use...
	I1209 10:51:35.341686  627293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 10:51:35.357724  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 10:51:35.372685  627293 docker.go:217] disabling cri-docker service (if available) ...
	I1209 10:51:35.372771  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 10:51:35.387627  627293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 10:51:35.401716  627293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 10:51:35.525416  627293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 10:51:35.688544  627293 docker.go:233] disabling docker service ...
	I1209 10:51:35.688627  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 10:51:35.703495  627293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 10:51:35.717769  627293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 10:51:35.838656  627293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 10:51:35.968740  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 10:51:35.982914  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 10:51:36.001011  627293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 10:51:36.001092  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.011496  627293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 10:51:36.011565  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.021527  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.031202  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.041196  627293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 10:51:36.051656  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.062085  627293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.078955  627293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 10:51:36.088919  627293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 10:51:36.098428  627293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 10:51:36.098491  627293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 10:51:36.112478  627293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 10:51:36.121985  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:51:36.236147  627293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 10:51:36.331891  627293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 10:51:36.331989  627293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 10:51:36.336578  627293 start.go:563] Will wait 60s for crictl version
	I1209 10:51:36.336641  627293 ssh_runner.go:195] Run: which crictl
	I1209 10:51:36.340301  627293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 10:51:36.380474  627293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 10:51:36.380557  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:51:36.408527  627293 ssh_runner.go:195] Run: crio --version
	I1209 10:51:36.438078  627293 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 10:51:36.439329  627293 out.go:177]   - env NO_PROXY=192.168.39.69
	I1209 10:51:36.440501  627293 out.go:177]   - env NO_PROXY=192.168.39.69,192.168.39.89
	I1209 10:51:36.441659  627293 main.go:141] libmachine: (ha-792382-m03) Calling .GetIP
	I1209 10:51:36.444828  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:36.445310  627293 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:51:36.445339  627293 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:51:36.445521  627293 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 10:51:36.449517  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:51:36.461352  627293 mustload.go:65] Loading cluster: ha-792382
	I1209 10:51:36.461581  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:51:36.461851  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:51:36.461915  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:51:36.476757  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I1209 10:51:36.477266  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:51:36.477839  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:51:36.477861  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:51:36.478264  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:51:36.478470  627293 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:51:36.480228  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:51:36.480540  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:51:36.480578  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:51:36.495892  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I1209 10:51:36.496439  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:51:36.496999  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:51:36.497024  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:51:36.497365  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:51:36.497597  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:51:36.497777  627293 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382 for IP: 192.168.39.82
	I1209 10:51:36.497796  627293 certs.go:194] generating shared ca certs ...
	I1209 10:51:36.497816  627293 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:51:36.497951  627293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 10:51:36.497987  627293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 10:51:36.497996  627293 certs.go:256] generating profile certs ...
	I1209 10:51:36.498067  627293 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key
	I1209 10:51:36.498091  627293 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.74be6275
	I1209 10:51:36.498107  627293 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.74be6275 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.89 192.168.39.82 192.168.39.254]
	I1209 10:51:36.575706  627293 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.74be6275 ...
	I1209 10:51:36.575744  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.74be6275: {Name:mkc0279d5f95c7c05a4a03239304c698f543bc23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:51:36.575927  627293 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.74be6275 ...
	I1209 10:51:36.575940  627293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.74be6275: {Name:mk628bdb195c5612308f11734296bd7934f36956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 10:51:36.576016  627293 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.74be6275 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt
	I1209 10:51:36.576148  627293 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.74be6275 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key
	I1209 10:51:36.576277  627293 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key
	I1209 10:51:36.576293  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 10:51:36.576307  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 10:51:36.576321  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 10:51:36.576334  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 10:51:36.576347  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 10:51:36.576359  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 10:51:36.576371  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 10:51:36.590260  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 10:51:36.590358  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 10:51:36.590394  627293 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 10:51:36.590412  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 10:51:36.590439  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 10:51:36.590462  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 10:51:36.590483  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 10:51:36.590521  627293 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:51:36.590548  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /usr/share/ca-certificates/6170172.pem
	I1209 10:51:36.590563  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:51:36.590576  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem -> /usr/share/ca-certificates/617017.pem
	I1209 10:51:36.590614  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:51:36.594031  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:51:36.594418  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:51:36.594452  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:51:36.594660  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:51:36.594910  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:51:36.595086  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:51:36.595232  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:51:36.666577  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 10:51:36.671392  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 10:51:36.681688  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 10:51:36.685694  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 10:51:36.696364  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 10:51:36.700718  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 10:51:36.712302  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 10:51:36.716534  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 10:51:36.728128  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 10:51:36.732026  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 10:51:36.743956  627293 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 10:51:36.748200  627293 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1209 10:51:36.761818  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 10:51:36.786260  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 10:51:36.809394  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 10:51:36.832350  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 10:51:36.854875  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1209 10:51:36.876691  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 10:51:36.900011  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 10:51:36.922859  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 10:51:36.945086  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 10:51:36.966983  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 10:51:36.989660  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 10:51:37.011442  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 10:51:37.027256  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 10:51:37.042921  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 10:51:37.059579  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 10:51:37.078911  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 10:51:37.094738  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1209 10:51:37.112113  627293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 10:51:37.130720  627293 ssh_runner.go:195] Run: openssl version
	I1209 10:51:37.136460  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 10:51:37.148061  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:51:37.152555  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:51:37.152627  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 10:51:37.158639  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 10:51:37.170061  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 10:51:37.180567  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 10:51:37.184633  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 10:51:37.184695  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 10:51:37.190044  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 10:51:37.200767  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 10:51:37.211239  627293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 10:51:37.215531  627293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 10:51:37.215617  627293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 10:51:37.221282  627293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 10:51:37.232891  627293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 10:51:37.237033  627293 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 10:51:37.237096  627293 kubeadm.go:934] updating node {m03 192.168.39.82 8443 v1.31.2 crio true true} ...
	I1209 10:51:37.237210  627293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792382-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 10:51:37.237247  627293 kube-vip.go:115] generating kube-vip config ...
	I1209 10:51:37.237291  627293 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 10:51:37.254154  627293 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 10:51:37.254286  627293 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
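[Annotation] The kube-vip config above is a static-pod manifest rendered from the node's VIP settings (address 192.168.39.254, port 8443, control-plane load-balancing enabled) and later written to /etc/kubernetes/manifests/kube-vip.yaml. A minimal Go text/template sketch of that kind of rendering follows; the template and parameter struct here are hypothetical stand-ins, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// vipParams is a hypothetical stand-in for the values minikube fills in.
type vipParams struct {
	VIP       string
	Port      int
	Interface string
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.7
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	// Values taken from the generated config shown above.
	err := t.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Port: 8443, Interface: "eth0"})
	if err != nil {
		panic(err)
	}
}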
	I1209 10:51:37.254376  627293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 10:51:37.266499  627293 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1209 10:51:37.266573  627293 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1209 10:51:37.276989  627293 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1209 10:51:37.277004  627293 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1209 10:51:37.277031  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 10:51:37.277052  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:51:37.277099  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1209 10:51:37.276989  627293 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1209 10:51:37.277162  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 10:51:37.277221  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1209 10:51:37.294260  627293 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 10:51:37.294329  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1209 10:51:37.294354  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1209 10:51:37.294397  627293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1209 10:51:37.294410  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1209 10:51:37.294447  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1209 10:51:37.309738  627293 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1209 10:51:37.309777  627293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
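[Annotation] The binaries step above finds no k8s binaries on the new node, downloads kubeadm, kubectl and kubelet from dl.k8s.io with checksum verification (the checksum=file:...sha256 suffix in the URLs), and scps them into /var/lib/minikube/binaries/v1.31.2. A rough, hedged sketch of the download-and-verify part in Go; the URL is taken from the log, the fetch helper is hypothetical.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch is a hypothetical helper that downloads a URL into memory.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	url := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"
	bin, err := fetch(url)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(url + ".sha256")
	if err != nil {
		panic(err)
	}
	// The published .sha256 file carries the hex digest of the binary.
	want := strings.Fields(string(sumFile))[0]
	sum := sha256.Sum256(bin)
	if got := hex.EncodeToString(sum[:]); got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	fmt.Printf("kubelet verified, %d bytes\n", len(bin))
}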
	I1209 10:51:38.106081  627293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 10:51:38.115636  627293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 10:51:38.132759  627293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 10:51:38.149726  627293 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 10:51:38.166083  627293 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 10:51:38.169937  627293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 10:51:38.181150  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:51:38.308494  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:51:38.325679  627293 host.go:66] Checking if "ha-792382" exists ...
	I1209 10:51:38.326045  627293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:51:38.326105  627293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:51:38.344459  627293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45865
	I1209 10:51:38.345084  627293 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:51:38.345753  627293 main.go:141] libmachine: Using API Version  1
	I1209 10:51:38.345796  627293 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:51:38.346197  627293 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:51:38.346437  627293 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:51:38.346586  627293 start.go:317] joinCluster: &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:51:38.346740  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 10:51:38.346768  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:51:38.349642  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:51:38.350099  627293 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:51:38.350125  627293 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:51:38.350286  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:51:38.350484  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:51:38.350634  627293 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:51:38.350780  627293 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:51:38.514216  627293 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:51:38.514274  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token exrmr9.huiz7swpoaojy929 --discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-792382-m03 --control-plane --apiserver-advertise-address=192.168.39.82 --apiserver-bind-port=8443"
	I1209 10:52:01.803198  627293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token exrmr9.huiz7swpoaojy929 --discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-792382-m03 --control-plane --apiserver-advertise-address=192.168.39.82 --apiserver-bind-port=8443": (23.288893034s)
	I1209 10:52:01.803245  627293 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1209 10:52:02.338453  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792382-m03 minikube.k8s.io/updated_at=2024_12_09T10_52_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=ha-792382 minikube.k8s.io/primary=false
	I1209 10:52:02.475613  627293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-792382-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 10:52:02.591820  627293 start.go:319] duration metric: took 24.245228011s to joinCluster
	I1209 10:52:02.591921  627293 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 10:52:02.592324  627293 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:52:02.593526  627293 out.go:177] * Verifying Kubernetes components...
	I1209 10:52:02.594809  627293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 10:52:02.839263  627293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 10:52:02.861519  627293 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:52:02.861874  627293 kapi.go:59] client config for ha-792382: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.crt", KeyFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key", CAFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 10:52:02.861974  627293 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.69:8443
	I1209 10:52:02.862413  627293 node_ready.go:35] waiting up to 6m0s for node "ha-792382-m03" to be "Ready" ...
	I1209 10:52:02.862536  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:02.862551  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:02.862563  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:02.862569  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:02.866706  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:03.363562  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:03.363585  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:03.363593  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:03.363597  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:03.367171  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:03.863250  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:03.863275  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:03.863284  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:03.863288  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:03.866476  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:04.363562  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:04.363593  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:04.363607  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:04.363611  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:04.367286  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:04.862912  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:04.862943  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:04.862957  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:04.862964  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:04.866217  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:04.866889  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:05.363334  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:05.363359  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:05.363368  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:05.363371  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:05.366850  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:05.863531  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:05.863565  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:05.863577  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:05.863584  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:05.867191  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:06.363075  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:06.363103  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:06.363116  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:06.363123  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:06.368722  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:06.862720  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:06.862750  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:06.862764  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:06.862773  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:06.865876  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:07.363131  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:07.363158  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:07.363167  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:07.363181  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:07.366603  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:07.367388  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:07.862715  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:07.862743  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:07.862756  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:07.862762  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:07.866073  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:08.362710  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:08.362744  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:08.362756  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:08.362763  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:08.366953  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:08.862771  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:08.862799  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:08.862808  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:08.862813  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:08.866875  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:09.362787  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:09.362812  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:09.362820  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:09.362824  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:09.367053  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:09.367603  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:09.862752  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:09.862786  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:09.862803  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:09.862809  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:09.866207  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:10.363296  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:10.363329  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:10.363341  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:10.363347  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:10.368594  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:10.863471  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:10.863504  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:10.863518  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:10.863523  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:10.868956  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:11.362961  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:11.362988  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:11.362998  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:11.363003  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:11.366828  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:11.862866  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:11.862896  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:11.862906  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:11.862912  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:11.868040  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:11.868910  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:12.363520  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:12.363543  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:12.363551  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:12.363555  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:12.367064  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:12.862709  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:12.862738  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:12.862747  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:12.862751  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:12.866024  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:13.362946  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:13.362972  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:13.362981  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:13.362985  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:13.367208  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:13.863257  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:13.863282  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:13.863291  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:13.863295  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:13.866570  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:14.363551  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:14.363576  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:14.363588  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:14.363595  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:14.367509  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:14.368341  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:14.863449  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:14.863475  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:14.863485  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:14.863492  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:14.866808  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:15.363473  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:15.363501  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:15.363510  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:15.363514  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:15.367252  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:15.863063  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:15.863086  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:15.863095  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:15.863099  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:15.866694  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:16.363487  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:16.363515  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:16.363525  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:16.363529  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:16.366968  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:16.863237  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:16.863267  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:16.863277  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:16.863285  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:16.866528  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:16.867067  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:17.363592  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:17.363616  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:17.363628  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:17.363634  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:17.367261  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:17.863310  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:17.863334  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:17.863343  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:17.863347  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:17.866881  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:18.363575  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:18.363603  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:18.363614  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:18.363624  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:18.368502  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:18.863660  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:18.863684  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:18.863693  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:18.863698  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:18.866946  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:18.867391  627293 node_ready.go:53] node "ha-792382-m03" has status "Ready":"False"
	I1209 10:52:19.362762  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:19.362786  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:19.362794  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:19.362798  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:19.366684  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:19.863495  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:19.863581  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:19.863600  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:19.863608  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:19.870858  627293 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1209 10:52:20.363448  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:20.363473  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.363482  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.363487  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.367472  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.368003  627293 node_ready.go:49] node "ha-792382-m03" has status "Ready":"True"
	I1209 10:52:20.368025  627293 node_ready.go:38] duration metric: took 17.505584111s for node "ha-792382-m03" to be "Ready" ...
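The GET loop above is node_ready.go polling the node object until its Ready condition flips to True. A minimal client-go sketch of that pattern (editor's illustration, not minikube source; the kubeconfig path is a placeholder and the node name is taken from the log):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's Ready condition is True.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; adjust for the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s wait in the log
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-792382-m03", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between GETs
	}
	fmt.Println("timed out waiting for node to be Ready")
}
```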
	I1209 10:52:20.368035  627293 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 10:52:20.368124  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:20.368135  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.368143  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.368147  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.375067  627293 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1209 10:52:20.382809  627293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.382913  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8hlml
	I1209 10:52:20.382922  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.382932  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.382939  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.386681  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.387473  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:20.387492  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.387502  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.387506  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.390201  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:20.390989  627293 pod_ready.go:93] pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.391012  627293 pod_ready.go:82] duration metric: took 8.170284ms for pod "coredns-7c65d6cfc9-8hlml" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.391025  627293 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.391107  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rz6mw
	I1209 10:52:20.391121  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.391132  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.391139  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.393896  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:20.394886  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:20.394902  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.394910  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.394913  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.397630  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:20.398092  627293 pod_ready.go:93] pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.398114  627293 pod_ready.go:82] duration metric: took 7.080989ms for pod "coredns-7c65d6cfc9-rz6mw" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.398128  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.398227  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382
	I1209 10:52:20.398238  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.398249  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.398255  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.402755  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:20.403454  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:20.403477  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.403487  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.403495  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.407171  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.407675  627293 pod_ready.go:93] pod "etcd-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.407690  627293 pod_ready.go:82] duration metric: took 9.55619ms for pod "etcd-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.407701  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.407761  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382-m02
	I1209 10:52:20.407769  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.407776  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.407782  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.411699  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.412198  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:20.412214  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.412221  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.412228  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.415128  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:20.415876  627293 pod_ready.go:93] pod "etcd-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.415895  627293 pod_ready.go:82] duration metric: took 8.185439ms for pod "etcd-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.415927  627293 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.564348  627293 request.go:632] Waited for 148.293235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382-m03
	I1209 10:52:20.564443  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792382-m03
	I1209 10:52:20.564455  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.564475  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.564485  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.567758  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:20.763843  627293 request.go:632] Waited for 195.366287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:20.763920  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:20.763933  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.763945  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.763957  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.772124  627293 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1209 10:52:20.772769  627293 pod_ready.go:93] pod "etcd-ha-792382-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:20.772802  627293 pod_ready.go:82] duration metric: took 356.849767ms for pod "etcd-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
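The "Waited for ... due to client-side throttling, not priority and fairness" lines above are emitted by client-go's client-side rate limiter, not by the apiserver. A hedged sketch of where that limiter is configured (QPS and Burst on rest.Config); the numbers below are illustrative, not minikube's settings:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// With QPS/Burst left at zero, client-go falls back to its defaults
	// (historically 5 QPS / 10 burst), which is what produces the
	// "client-side throttling" waits when many GETs are issued back to back.
	// Raising them reduces those waits.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = cs // use the clientset as usual; requests are now limited at 50 QPS
	fmt.Println("clientset configured with custom QPS/Burst")
}
```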
	I1209 10:52:20.772827  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:20.963692  627293 request.go:632] Waited for 190.744323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382
	I1209 10:52:20.963762  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382
	I1209 10:52:20.963767  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:20.963775  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:20.963781  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:20.966983  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.163987  627293 request.go:632] Waited for 196.382643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:21.164057  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:21.164062  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.164070  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.164074  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.167406  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.168047  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:21.168074  627293 pod_ready.go:82] duration metric: took 395.237987ms for pod "kube-apiserver-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.168086  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.364059  627293 request.go:632] Waited for 195.853676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m02
	I1209 10:52:21.364141  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m02
	I1209 10:52:21.364147  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.364155  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.364164  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.368500  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:21.563923  627293 request.go:632] Waited for 194.790397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:21.563997  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:21.564006  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.564018  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.564029  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.567739  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.568495  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:21.568518  627293 pod_ready.go:82] duration metric: took 400.423423ms for pod "kube-apiserver-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.568529  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.763480  627293 request.go:632] Waited for 194.86491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m03
	I1209 10:52:21.763574  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792382-m03
	I1209 10:52:21.763581  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.763594  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.763602  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.767033  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.964208  627293 request.go:632] Waited for 196.356498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:21.964296  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:21.964305  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:21.964340  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:21.964351  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:21.967752  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:21.968228  627293 pod_ready.go:93] pod "kube-apiserver-ha-792382-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:21.968247  627293 pod_ready.go:82] duration metric: took 399.712092ms for pod "kube-apiserver-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:21.968258  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.163746  627293 request.go:632] Waited for 195.415661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382
	I1209 10:52:22.163805  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382
	I1209 10:52:22.163810  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.163823  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.163830  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.166645  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:22.364336  627293 request.go:632] Waited for 197.03194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:22.364428  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:22.364449  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.364480  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.364491  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.368286  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:22.369016  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:22.369039  627293 pod_ready.go:82] duration metric: took 400.774826ms for pod "kube-controller-manager-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.369050  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.564041  627293 request.go:632] Waited for 194.907266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m02
	I1209 10:52:22.564119  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m02
	I1209 10:52:22.564127  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.564140  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.564149  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.567707  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:22.763845  627293 request.go:632] Waited for 195.40032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:22.763928  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:22.763935  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.763956  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.763982  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.767705  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:22.768312  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:22.768335  627293 pod_ready.go:82] duration metric: took 399.277854ms for pod "kube-controller-manager-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.768350  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:22.964360  627293 request.go:632] Waited for 195.903206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m03
	I1209 10:52:22.964433  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792382-m03
	I1209 10:52:22.964446  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:22.964457  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:22.964465  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:22.967540  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:23.163523  627293 request.go:632] Waited for 195.162382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:23.163590  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:23.163596  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.163611  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.163618  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.166875  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:23.167557  627293 pod_ready.go:93] pod "kube-controller-manager-ha-792382-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:23.167581  627293 pod_ready.go:82] duration metric: took 399.219283ms for pod "kube-controller-manager-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.167592  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2l42s" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.364163  627293 request.go:632] Waited for 196.469736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2l42s
	I1209 10:52:23.364233  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2l42s
	I1209 10:52:23.364240  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.364250  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.364256  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.368871  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:23.564369  627293 request.go:632] Waited for 194.396631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:23.564485  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:23.564496  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.564504  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.564509  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.567861  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:23.568367  627293 pod_ready.go:93] pod "kube-proxy-2l42s" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:23.568387  627293 pod_ready.go:82] duration metric: took 400.786442ms for pod "kube-proxy-2l42s" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.568400  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dckpl" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.763515  627293 request.go:632] Waited for 195.023087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dckpl
	I1209 10:52:23.763600  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dckpl
	I1209 10:52:23.763608  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.763619  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.763628  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.767899  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:23.964038  627293 request.go:632] Waited for 195.369645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:23.964137  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:23.964144  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:23.964152  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:23.964161  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:23.967628  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:23.968543  627293 pod_ready.go:93] pod "kube-proxy-dckpl" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:23.968572  627293 pod_ready.go:82] duration metric: took 400.162458ms for pod "kube-proxy-dckpl" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:23.968586  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wrvgb" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.164418  627293 request.go:632] Waited for 195.731455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgb
	I1209 10:52:24.164497  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgb
	I1209 10:52:24.164502  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.164511  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.164516  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.167227  627293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1209 10:52:24.364211  627293 request.go:632] Waited for 196.319396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:24.364296  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:24.364308  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.364319  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.364330  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.368387  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:24.369158  627293 pod_ready.go:93] pod "kube-proxy-wrvgb" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:24.369182  627293 pod_ready.go:82] duration metric: took 400.580765ms for pod "kube-proxy-wrvgb" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.369195  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.564251  627293 request.go:632] Waited for 194.959562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382
	I1209 10:52:24.564342  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382
	I1209 10:52:24.564348  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.564357  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.564361  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.568298  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:24.764304  627293 request.go:632] Waited for 195.363618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:24.764392  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382
	I1209 10:52:24.764408  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.764418  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.764425  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.768139  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:24.768711  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:24.768733  627293 pod_ready.go:82] duration metric: took 399.519254ms for pod "kube-scheduler-ha-792382" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.768746  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:24.963667  627293 request.go:632] Waited for 194.82946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m02
	I1209 10:52:24.963730  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m02
	I1209 10:52:24.963736  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:24.963744  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:24.963749  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:24.967092  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:25.164276  627293 request.go:632] Waited for 196.380929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:25.164345  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m02
	I1209 10:52:25.164349  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.164358  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.164364  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.169070  627293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1209 10:52:25.169673  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:25.169696  627293 pod_ready.go:82] duration metric: took 400.939865ms for pod "kube-scheduler-ha-792382-m02" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:25.169706  627293 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:25.363779  627293 request.go:632] Waited for 193.996151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m03
	I1209 10:52:25.363866  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792382-m03
	I1209 10:52:25.363882  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.363912  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.363923  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.367885  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:25.563919  627293 request.go:632] Waited for 195.39244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:25.563987  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-792382-m03
	I1209 10:52:25.563992  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.564000  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.564003  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.567759  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:25.568223  627293 pod_ready.go:93] pod "kube-scheduler-ha-792382-m03" in "kube-system" namespace has status "Ready":"True"
	I1209 10:52:25.568247  627293 pod_ready.go:82] duration metric: took 398.53325ms for pod "kube-scheduler-ha-792382-m03" in "kube-system" namespace to be "Ready" ...
	I1209 10:52:25.568262  627293 pod_ready.go:39] duration metric: took 5.200212564s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 10:52:25.568288  627293 api_server.go:52] waiting for apiserver process to appear ...
	I1209 10:52:25.568359  627293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 10:52:25.588000  627293 api_server.go:72] duration metric: took 22.996035203s to wait for apiserver process to appear ...
	I1209 10:52:25.588031  627293 api_server.go:88] waiting for apiserver healthz status ...
	I1209 10:52:25.588055  627293 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I1209 10:52:25.592469  627293 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I1209 10:52:25.592544  627293 round_trippers.go:463] GET https://192.168.39.69:8443/version
	I1209 10:52:25.592549  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.592557  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.592563  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.593630  627293 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1209 10:52:25.593699  627293 api_server.go:141] control plane version: v1.31.2
	I1209 10:52:25.593714  627293 api_server.go:131] duration metric: took 5.676129ms to wait for apiserver health ...
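The two checks logged here are a GET on /healthz (a healthy apiserver answers 200 with the body "ok") followed by a GET on /version. A short client-go sketch of both, as an editor's illustration with a placeholder kubeconfig path:

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz; the body is "ok" when the apiserver is healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version; the log reports this as "control plane version: v1.31.2".
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
```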
	I1209 10:52:25.593722  627293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 10:52:25.764156  627293 request.go:632] Waited for 170.352326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:25.764268  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:25.764281  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.764294  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.764301  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.774462  627293 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1209 10:52:25.781848  627293 system_pods.go:59] 24 kube-system pods found
	I1209 10:52:25.781880  627293 system_pods.go:61] "coredns-7c65d6cfc9-8hlml" [d820cd6c-5064-4934-adc8-c68f84c09b46] Running
	I1209 10:52:25.781886  627293 system_pods.go:61] "coredns-7c65d6cfc9-rz6mw" [af297b6d-91f1-4114-b98c-cdfdfbd1589e] Running
	I1209 10:52:25.781890  627293 system_pods.go:61] "etcd-ha-792382" [eee2fbad-7f2f-4f5a-b701-abdbf8456f99] Running
	I1209 10:52:25.781894  627293 system_pods.go:61] "etcd-ha-792382-m02" [424768ff-af5d-4a31-b062-ab2c9576884d] Running
	I1209 10:52:25.781897  627293 system_pods.go:61] "etcd-ha-792382-m03" [4112b988-6915-413a-badd-c0207865e60d] Running
	I1209 10:52:25.781900  627293 system_pods.go:61] "kindnet-6hlht" [23156ebc-d366-4fc2-bedb-7a63e950b116] Running
	I1209 10:52:25.781903  627293 system_pods.go:61] "kindnet-bqp2z" [b2c40579-4d72-4efe-b921-1e0f98b91544] Running
	I1209 10:52:25.781906  627293 system_pods.go:61] "kindnet-hkrhk" [9b35011c-ab45-4f55-a60f-08f9c4509c1d] Running
	I1209 10:52:25.781909  627293 system_pods.go:61] "kube-apiserver-ha-792382" [5157cfb0-bc91-4efe-b2e2-689778d5b012] Running
	I1209 10:52:25.781913  627293 system_pods.go:61] "kube-apiserver-ha-792382-m02" [71f9ce1c-aeff-4853-83ec-01a7fb6a81d5] Running
	I1209 10:52:25.781916  627293 system_pods.go:61] "kube-apiserver-ha-792382-m03" [5cd4395c-58a8-45ba-90ea-72105d25fadd] Running
	I1209 10:52:25.781919  627293 system_pods.go:61] "kube-controller-manager-ha-792382" [f8cb7175-f6fd-4dcf-a6a5-66c44aa136ce] Running
	I1209 10:52:25.781922  627293 system_pods.go:61] "kube-controller-manager-ha-792382-m02" [2f7d8deb-ddad-47ac-ad18-5f8e7d0e095d] Running
	I1209 10:52:25.781926  627293 system_pods.go:61] "kube-controller-manager-ha-792382-m03" [5c5d03de-e7e9-491b-a6fd-fdc50b4ce7ed] Running
	I1209 10:52:25.781930  627293 system_pods.go:61] "kube-proxy-2l42s" [a4bfe3cb-9b06-4d1e-9887-c461d31aaaec] Running
	I1209 10:52:25.781934  627293 system_pods.go:61] "kube-proxy-dckpl" [13f7bda1-f9c2-4fd0-96e0-b6aee1139bc1] Running
	I1209 10:52:25.781940  627293 system_pods.go:61] "kube-proxy-wrvgb" [2531e29f-a4d5-41f9-8c38-3220b4caf96b] Running
	I1209 10:52:25.781942  627293 system_pods.go:61] "kube-scheduler-ha-792382" [b11693d0-b2e7-45a1-a1a2-6519a1535c45] Running
	I1209 10:52:25.781945  627293 system_pods.go:61] "kube-scheduler-ha-792382-m02" [250ce40c-5cbf-4f5d-a475-4f3d7ec100d9] Running
	I1209 10:52:25.781948  627293 system_pods.go:61] "kube-scheduler-ha-792382-m03" [b994f699-40b5-423e-b92f-3ca6208e69d0] Running
	I1209 10:52:25.781951  627293 system_pods.go:61] "kube-vip-ha-792382" [511f3a84-b444-4603-8f6f-eeeb262b1384] Running
	I1209 10:52:25.781954  627293 system_pods.go:61] "kube-vip-ha-792382-m02" [f2110c28-09f5-42f9-8e16-beec9759a0a2] Running
	I1209 10:52:25.781957  627293 system_pods.go:61] "kube-vip-ha-792382-m03" [5eee7c3c-1b75-48ad-813e-963fa4308d1b] Running
	I1209 10:52:25.781960  627293 system_pods.go:61] "storage-provisioner" [4419fe4f-e2ed-4ecb-a912-2dd074e29727] Running
	I1209 10:52:25.781965  627293 system_pods.go:74] duration metric: took 188.238253ms to wait for pod list to return data ...
	I1209 10:52:25.781976  627293 default_sa.go:34] waiting for default service account to be created ...
	I1209 10:52:25.964450  627293 request.go:632] Waited for 182.375955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I1209 10:52:25.964524  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I1209 10:52:25.964529  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:25.964538  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:25.964543  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:25.968489  627293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1209 10:52:25.968636  627293 default_sa.go:45] found service account: "default"
	I1209 10:52:25.968653  627293 default_sa.go:55] duration metric: took 186.669919ms for default service account to be created ...
	I1209 10:52:25.968664  627293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 10:52:26.163895  627293 request.go:632] Waited for 195.104758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:26.163963  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I1209 10:52:26.163969  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:26.163977  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:26.163981  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:26.169457  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:26.176126  627293 system_pods.go:86] 24 kube-system pods found
	I1209 10:52:26.176160  627293 system_pods.go:89] "coredns-7c65d6cfc9-8hlml" [d820cd6c-5064-4934-adc8-c68f84c09b46] Running
	I1209 10:52:26.176166  627293 system_pods.go:89] "coredns-7c65d6cfc9-rz6mw" [af297b6d-91f1-4114-b98c-cdfdfbd1589e] Running
	I1209 10:52:26.176171  627293 system_pods.go:89] "etcd-ha-792382" [eee2fbad-7f2f-4f5a-b701-abdbf8456f99] Running
	I1209 10:52:26.176175  627293 system_pods.go:89] "etcd-ha-792382-m02" [424768ff-af5d-4a31-b062-ab2c9576884d] Running
	I1209 10:52:26.176178  627293 system_pods.go:89] "etcd-ha-792382-m03" [4112b988-6915-413a-badd-c0207865e60d] Running
	I1209 10:52:26.176184  627293 system_pods.go:89] "kindnet-6hlht" [23156ebc-d366-4fc2-bedb-7a63e950b116] Running
	I1209 10:52:26.176189  627293 system_pods.go:89] "kindnet-bqp2z" [b2c40579-4d72-4efe-b921-1e0f98b91544] Running
	I1209 10:52:26.176195  627293 system_pods.go:89] "kindnet-hkrhk" [9b35011c-ab45-4f55-a60f-08f9c4509c1d] Running
	I1209 10:52:26.176201  627293 system_pods.go:89] "kube-apiserver-ha-792382" [5157cfb0-bc91-4efe-b2e2-689778d5b012] Running
	I1209 10:52:26.176206  627293 system_pods.go:89] "kube-apiserver-ha-792382-m02" [71f9ce1c-aeff-4853-83ec-01a7fb6a81d5] Running
	I1209 10:52:26.176212  627293 system_pods.go:89] "kube-apiserver-ha-792382-m03" [5cd4395c-58a8-45ba-90ea-72105d25fadd] Running
	I1209 10:52:26.176220  627293 system_pods.go:89] "kube-controller-manager-ha-792382" [f8cb7175-f6fd-4dcf-a6a5-66c44aa136ce] Running
	I1209 10:52:26.176231  627293 system_pods.go:89] "kube-controller-manager-ha-792382-m02" [2f7d8deb-ddad-47ac-ad18-5f8e7d0e095d] Running
	I1209 10:52:26.176240  627293 system_pods.go:89] "kube-controller-manager-ha-792382-m03" [5c5d03de-e7e9-491b-a6fd-fdc50b4ce7ed] Running
	I1209 10:52:26.176245  627293 system_pods.go:89] "kube-proxy-2l42s" [a4bfe3cb-9b06-4d1e-9887-c461d31aaaec] Running
	I1209 10:52:26.176254  627293 system_pods.go:89] "kube-proxy-dckpl" [13f7bda1-f9c2-4fd0-96e0-b6aee1139bc1] Running
	I1209 10:52:26.176263  627293 system_pods.go:89] "kube-proxy-wrvgb" [2531e29f-a4d5-41f9-8c38-3220b4caf96b] Running
	I1209 10:52:26.176272  627293 system_pods.go:89] "kube-scheduler-ha-792382" [b11693d0-b2e7-45a1-a1a2-6519a1535c45] Running
	I1209 10:52:26.176285  627293 system_pods.go:89] "kube-scheduler-ha-792382-m02" [250ce40c-5cbf-4f5d-a475-4f3d7ec100d9] Running
	I1209 10:52:26.176294  627293 system_pods.go:89] "kube-scheduler-ha-792382-m03" [b994f699-40b5-423e-b92f-3ca6208e69d0] Running
	I1209 10:52:26.176303  627293 system_pods.go:89] "kube-vip-ha-792382" [511f3a84-b444-4603-8f6f-eeeb262b1384] Running
	I1209 10:52:26.176312  627293 system_pods.go:89] "kube-vip-ha-792382-m02" [f2110c28-09f5-42f9-8e16-beec9759a0a2] Running
	I1209 10:52:26.176320  627293 system_pods.go:89] "kube-vip-ha-792382-m03" [5eee7c3c-1b75-48ad-813e-963fa4308d1b] Running
	I1209 10:52:26.176327  627293 system_pods.go:89] "storage-provisioner" [4419fe4f-e2ed-4ecb-a912-2dd074e29727] Running
	I1209 10:52:26.176338  627293 system_pods.go:126] duration metric: took 207.663846ms to wait for k8s-apps to be running ...
	I1209 10:52:26.176348  627293 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 10:52:26.176410  627293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 10:52:26.193241  627293 system_svc.go:56] duration metric: took 16.882967ms WaitForService to wait for kubelet
	I1209 10:52:26.193274  627293 kubeadm.go:582] duration metric: took 23.601316183s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 10:52:26.193295  627293 node_conditions.go:102] verifying NodePressure condition ...
	I1209 10:52:26.363791  627293 request.go:632] Waited for 170.378697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes
	I1209 10:52:26.363869  627293 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes
	I1209 10:52:26.363877  627293 round_trippers.go:469] Request Headers:
	I1209 10:52:26.363893  627293 round_trippers.go:473]     Accept: application/json, */*
	I1209 10:52:26.363902  627293 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1209 10:52:26.369525  627293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1209 10:52:26.370723  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:52:26.370747  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:52:26.370760  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:52:26.370763  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:52:26.370766  627293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 10:52:26.370770  627293 node_conditions.go:123] node cpu capacity is 2
	I1209 10:52:26.370774  627293 node_conditions.go:105] duration metric: took 177.473705ms to run NodePressure ...
	I1209 10:52:26.370790  627293 start.go:241] waiting for startup goroutines ...
	I1209 10:52:26.370823  627293 start.go:255] writing updated cluster config ...
	I1209 10:52:26.371156  627293 ssh_runner.go:195] Run: rm -f paused
	I1209 10:52:26.426485  627293 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 10:52:26.428634  627293 out.go:177] * Done! kubectl is now configured to use "ha-792382" cluster and "default" namespace by default
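The wait loops logged above (kube-system pods running, default service account present, node CPU and ephemeral-storage capacity) all go through the same apiserver at 192.168.39.69:8443. Below is a minimal client-go sketch of the same checks; it is illustrative only and assumes the kubeconfig written by the test run is reachable via the KUBECONFIG environment variable (the "ha-792382" context and names come from the log, not from this sketch).

    // Minimal sketch (not part of the test run): repeat the readiness checks
    // logged above with client-go. KUBECONFIG is an assumption for illustration.
    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()

    	// Same check as the system_pods wait above: list kube-system pods and phases.
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
    	}

    	// Same check as the NodePressure/node_conditions step: report node capacity.
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n", n.Name, cpu.String(), storage.String())
    	}
    }

The repeated "Waited for ... due to client-side throttling" lines above come from client-go's default rate limiter; the checks themselves are plain list requests like the ones in this sketch.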
	
	
	==> CRI-O <==
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.673667647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741784673643480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef26f90d-edc8-458e-b364-0e4cdb885936 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.674102408Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10a89ece-8798-407c-9b15-7de654387d6e name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.674151382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10a89ece-8798-407c-9b15-7de654387d6e name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.674537474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3354d3bec20606deb1f19130215939f5c7b2123b73319ef892bffc278d375fb8,PodSandboxId:e47f42b7e0900b7149c93ac504858a9b845addf2278ae1c0abd45e66fc9df066,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733741551077576684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z9wjm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b911f2-4cd1-486a-9276-1e98745ede0e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd,PodSandboxId:a5c60a0e3c19becc3387cabd45cb24c34922bd26351286c88744f9e2caf6f8d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412981896414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8hlml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d820cd6c-5064-4934-adc8-c68f84c09b46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733,PodSandboxId:038ff3d97cfe50dddc147974bab92d243212a5896794a4bc3f1ce2b77fdb5e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412958470450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rz6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
af297b6d-91f1-4114-b98c-cdfdfbd1589e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa96349b5a7bf6e8223fd16d01f77aa0d3c45aa83ad28fc42eec5d2a80dd24,PodSandboxId:02bd44e5a67d9df1c56cc6297ec66940ce286c89b103f7efa4b35259d2a2f8c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733741412903452485,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4419fe4f-e2ed-4ecb-a912-2dd074e29727,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3,PodSandboxId:cfb791c6d05cec0b9cc244408089c309a41f321fe06c2083eabaeaf9184c0ff6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733741400823508110,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c40579-4d72-4efe-b921-1e0f98b91544,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522,PodSandboxId:82b54a7467a7aa9cf761959fb4e7e7ea830c6fdcf33629ec45c8b55326334f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733741398
382511881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrvgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2531e29f-a4d5-41f9-8c38-3220b4caf96b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082e8ff7e6c7e3e1110852687152ddb22202b3134aa3105a8b11aa7d702bcd6a,PodSandboxId:1486ff19db45e7b9cb8b545f56579a8765f6cef7fe3b1d44f967a5c5823b4cdc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173374138888
4669062,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9922f13afb31842008ba0179dabd897e,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63,PodSandboxId:27e12e36b1bd81bfdb6cc876b8c1a9c925ed2eec6ac687056f4bd10eb872b7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733741386069265058,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2460a8b15a62b9cf3ad5343586bde402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f,PodSandboxId:7bbf390b8ef03829849defea1ba3817acb7a86ce1705fb53d765d7ddca57a066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733741386071035939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89a89b1c65df6e3ad9608c5607172f77,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee,PodSandboxId:9493b93aded71efd1c05e2df3b9537c0c63dc9b2e9abdbf1be430d3c29d5ad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733741386009117508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 082fcfac40bcf36b76f1e733a9f73bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604,PodSandboxId:02e8433fa67cc5aef5d43a5edf820340ad5d91eef3bdd9e7149eeec0e55a8c95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733741385994463722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d8d358ed72ac30c9365aedd3aee4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10a89ece-8798-407c-9b15-7de654387d6e name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.710626466Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f910c593-235c-4f30-9472-feb3449f3171 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.710705495Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f910c593-235c-4f30-9472-feb3449f3171 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.711817764Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e436f63c-dc5a-402f-a652-cd33063b1cf8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.712237270Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741784712215082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e436f63c-dc5a-402f-a652-cd33063b1cf8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.712892873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a7abf5c-957a-43eb-ae1b-a38da14efd7a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.712950124Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a7abf5c-957a-43eb-ae1b-a38da14efd7a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.713217540Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3354d3bec20606deb1f19130215939f5c7b2123b73319ef892bffc278d375fb8,PodSandboxId:e47f42b7e0900b7149c93ac504858a9b845addf2278ae1c0abd45e66fc9df066,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733741551077576684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z9wjm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b911f2-4cd1-486a-9276-1e98745ede0e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd,PodSandboxId:a5c60a0e3c19becc3387cabd45cb24c34922bd26351286c88744f9e2caf6f8d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412981896414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8hlml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d820cd6c-5064-4934-adc8-c68f84c09b46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733,PodSandboxId:038ff3d97cfe50dddc147974bab92d243212a5896794a4bc3f1ce2b77fdb5e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412958470450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rz6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
af297b6d-91f1-4114-b98c-cdfdfbd1589e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa96349b5a7bf6e8223fd16d01f77aa0d3c45aa83ad28fc42eec5d2a80dd24,PodSandboxId:02bd44e5a67d9df1c56cc6297ec66940ce286c89b103f7efa4b35259d2a2f8c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733741412903452485,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4419fe4f-e2ed-4ecb-a912-2dd074e29727,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3,PodSandboxId:cfb791c6d05cec0b9cc244408089c309a41f321fe06c2083eabaeaf9184c0ff6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733741400823508110,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c40579-4d72-4efe-b921-1e0f98b91544,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522,PodSandboxId:82b54a7467a7aa9cf761959fb4e7e7ea830c6fdcf33629ec45c8b55326334f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733741398
382511881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrvgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2531e29f-a4d5-41f9-8c38-3220b4caf96b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082e8ff7e6c7e3e1110852687152ddb22202b3134aa3105a8b11aa7d702bcd6a,PodSandboxId:1486ff19db45e7b9cb8b545f56579a8765f6cef7fe3b1d44f967a5c5823b4cdc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173374138888
4669062,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9922f13afb31842008ba0179dabd897e,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63,PodSandboxId:27e12e36b1bd81bfdb6cc876b8c1a9c925ed2eec6ac687056f4bd10eb872b7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733741386069265058,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2460a8b15a62b9cf3ad5343586bde402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f,PodSandboxId:7bbf390b8ef03829849defea1ba3817acb7a86ce1705fb53d765d7ddca57a066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733741386071035939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89a89b1c65df6e3ad9608c5607172f77,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee,PodSandboxId:9493b93aded71efd1c05e2df3b9537c0c63dc9b2e9abdbf1be430d3c29d5ad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733741386009117508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 082fcfac40bcf36b76f1e733a9f73bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604,PodSandboxId:02e8433fa67cc5aef5d43a5edf820340ad5d91eef3bdd9e7149eeec0e55a8c95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733741385994463722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d8d358ed72ac30c9365aedd3aee4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a7abf5c-957a-43eb-ae1b-a38da14efd7a name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.753733113Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=092dc546-c8ae-46f5-b639-ec92fec24d87 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.753832624Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=092dc546-c8ae-46f5-b639-ec92fec24d87 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.755064874Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09b0579c-9315-47d6-8f09-1e6f40e0d86e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.755594155Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741784755560058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09b0579c-9315-47d6-8f09-1e6f40e0d86e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.756346092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2306bcb9-875d-4fd5-8ec0-eefb4c7e0069 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.756400858Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2306bcb9-875d-4fd5-8ec0-eefb4c7e0069 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.756998231Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3354d3bec20606deb1f19130215939f5c7b2123b73319ef892bffc278d375fb8,PodSandboxId:e47f42b7e0900b7149c93ac504858a9b845addf2278ae1c0abd45e66fc9df066,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733741551077576684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z9wjm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b911f2-4cd1-486a-9276-1e98745ede0e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd,PodSandboxId:a5c60a0e3c19becc3387cabd45cb24c34922bd26351286c88744f9e2caf6f8d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412981896414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8hlml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d820cd6c-5064-4934-adc8-c68f84c09b46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733,PodSandboxId:038ff3d97cfe50dddc147974bab92d243212a5896794a4bc3f1ce2b77fdb5e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412958470450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rz6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
af297b6d-91f1-4114-b98c-cdfdfbd1589e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa96349b5a7bf6e8223fd16d01f77aa0d3c45aa83ad28fc42eec5d2a80dd24,PodSandboxId:02bd44e5a67d9df1c56cc6297ec66940ce286c89b103f7efa4b35259d2a2f8c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733741412903452485,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4419fe4f-e2ed-4ecb-a912-2dd074e29727,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3,PodSandboxId:cfb791c6d05cec0b9cc244408089c309a41f321fe06c2083eabaeaf9184c0ff6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733741400823508110,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c40579-4d72-4efe-b921-1e0f98b91544,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522,PodSandboxId:82b54a7467a7aa9cf761959fb4e7e7ea830c6fdcf33629ec45c8b55326334f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733741398
382511881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrvgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2531e29f-a4d5-41f9-8c38-3220b4caf96b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082e8ff7e6c7e3e1110852687152ddb22202b3134aa3105a8b11aa7d702bcd6a,PodSandboxId:1486ff19db45e7b9cb8b545f56579a8765f6cef7fe3b1d44f967a5c5823b4cdc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173374138888
4669062,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9922f13afb31842008ba0179dabd897e,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63,PodSandboxId:27e12e36b1bd81bfdb6cc876b8c1a9c925ed2eec6ac687056f4bd10eb872b7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733741386069265058,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2460a8b15a62b9cf3ad5343586bde402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f,PodSandboxId:7bbf390b8ef03829849defea1ba3817acb7a86ce1705fb53d765d7ddca57a066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733741386071035939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89a89b1c65df6e3ad9608c5607172f77,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee,PodSandboxId:9493b93aded71efd1c05e2df3b9537c0c63dc9b2e9abdbf1be430d3c29d5ad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733741386009117508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 082fcfac40bcf36b76f1e733a9f73bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604,PodSandboxId:02e8433fa67cc5aef5d43a5edf820340ad5d91eef3bdd9e7149eeec0e55a8c95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733741385994463722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d8d358ed72ac30c9365aedd3aee4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2306bcb9-875d-4fd5-8ec0-eefb4c7e0069 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.798890201Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc0e6cf9-0e2c-401b-a10f-ca25a50d0571 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.799015687Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc0e6cf9-0e2c-401b-a10f-ca25a50d0571 name=/runtime.v1.RuntimeService/Version
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.800203122Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1b43ba9-643f-4959-855f-df01f6fd2be5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.801001118Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741784800975110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1b43ba9-643f-4959-855f-df01f6fd2be5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.801903039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6eaaea57-cf6b-4520-83ed-dcd83b1b64ca name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.801992583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6eaaea57-cf6b-4520-83ed-dcd83b1b64ca name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 10:56:24 ha-792382 crio[665]: time="2024-12-09 10:56:24.802374793Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3354d3bec20606deb1f19130215939f5c7b2123b73319ef892bffc278d375fb8,PodSandboxId:e47f42b7e0900b7149c93ac504858a9b845addf2278ae1c0abd45e66fc9df066,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733741551077576684,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-z9wjm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b911f2-4cd1-486a-9276-1e98745ede0e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd,PodSandboxId:a5c60a0e3c19becc3387cabd45cb24c34922bd26351286c88744f9e2caf6f8d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412981896414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8hlml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d820cd6c-5064-4934-adc8-c68f84c09b46,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733,PodSandboxId:038ff3d97cfe50dddc147974bab92d243212a5896794a4bc3f1ce2b77fdb5e39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733741412958470450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rz6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
af297b6d-91f1-4114-b98c-cdfdfbd1589e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa96349b5a7bf6e8223fd16d01f77aa0d3c45aa83ad28fc42eec5d2a80dd24,PodSandboxId:02bd44e5a67d9df1c56cc6297ec66940ce286c89b103f7efa4b35259d2a2f8c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733741412903452485,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4419fe4f-e2ed-4ecb-a912-2dd074e29727,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3,PodSandboxId:cfb791c6d05cec0b9cc244408089c309a41f321fe06c2083eabaeaf9184c0ff6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733741400823508110,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bqp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c40579-4d72-4efe-b921-1e0f98b91544,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522,PodSandboxId:82b54a7467a7aa9cf761959fb4e7e7ea830c6fdcf33629ec45c8b55326334f67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733741398
382511881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrvgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2531e29f-a4d5-41f9-8c38-3220b4caf96b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082e8ff7e6c7e3e1110852687152ddb22202b3134aa3105a8b11aa7d702bcd6a,PodSandboxId:1486ff19db45e7b9cb8b545f56579a8765f6cef7fe3b1d44f967a5c5823b4cdc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173374138888
4669062,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9922f13afb31842008ba0179dabd897e,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63,PodSandboxId:27e12e36b1bd81bfdb6cc876b8c1a9c925ed2eec6ac687056f4bd10eb872b7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733741386069265058,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2460a8b15a62b9cf3ad5343586bde402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f,PodSandboxId:7bbf390b8ef03829849defea1ba3817acb7a86ce1705fb53d765d7ddca57a066,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733741386071035939,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89a89b1c65df6e3ad9608c5607172f77,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee,PodSandboxId:9493b93aded71efd1c05e2df3b9537c0c63dc9b2e9abdbf1be430d3c29d5ad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733741386009117508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 082fcfac40bcf36b76f1e733a9f73bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604,PodSandboxId:02e8433fa67cc5aef5d43a5edf820340ad5d91eef3bdd9e7149eeec0e55a8c95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733741385994463722,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-792382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4d8d358ed72ac30c9365aedd3aee4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6eaaea57-cf6b-4520-83ed-dcd83b1b64ca name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3354d3bec2060       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e47f42b7e0900       busybox-7dff88458-z9wjm
	f4ba11ff07ea5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   a5c60a0e3c19b       coredns-7c65d6cfc9-8hlml
	afc0f0aea4c8a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   038ff3d97cfe5       coredns-7c65d6cfc9-rz6mw
	d9fa96349b5a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   02bd44e5a67d9       storage-provisioner
	b6bf7c7cf0d68       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3    6 minutes ago       Running             kindnet-cni               0                   cfb791c6d05ce       kindnet-bqp2z
	3cf6196a4789e       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   82b54a7467a7a       kube-proxy-wrvgb
	082e8ff7e6c7e       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   1486ff19db45e       kube-vip-ha-792382
	64b96c1c22970       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   7bbf390b8ef03       kube-apiserver-ha-792382
	778345b29099a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   27e12e36b1bd8       etcd-ha-792382
	d93c68b855d9f       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   9493b93aded71       kube-scheduler-ha-792382
	00db8f77881ef       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   02e8433fa67cc       kube-controller-manager-ha-792382
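
The container status table above is CRI-O's view from inside the primary control-plane VM: every static pod plus kindnet, kube-proxy, CoreDNS and the storage provisioner is still Running with zero restarts. A minimal reproduction sketch, assuming the minikube profile is named ha-792382 (matching the node names in this log):

    # crictl talks to CRI-O inside the guest; the profile name ha-792382 is an assumption.
    minikube -p ha-792382 ssh -- sudo crictl ps -a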
	
	
	==> coredns [afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733] <==
	[INFO] 10.244.2.2:57485 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000178522s
	[INFO] 10.244.2.2:51008 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003461693s
	[INFO] 10.244.2.2:51209 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132423s
	[INFO] 10.244.2.2:44233 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160403s
	[INFO] 10.244.2.2:36343 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113366s
	[INFO] 10.244.1.2:40108 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001755871s
	[INFO] 10.244.1.2:57627 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088641s
	[INFO] 10.244.0.4:49175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210271s
	[INFO] 10.244.0.4:42721 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001653061s
	[INFO] 10.244.0.4:53085 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087293s
	[INFO] 10.244.2.2:46633 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111394s
	[INFO] 10.244.2.2:34060 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087724s
	[INFO] 10.244.2.2:42086 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112165s
	[INFO] 10.244.1.2:55917 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167759s
	[INFO] 10.244.1.2:38190 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113655s
	[INFO] 10.244.1.2:46262 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092112s
	[INFO] 10.244.1.2:55410 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080217s
	[INFO] 10.244.0.4:43802 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073668s
	[INFO] 10.244.0.4:48010 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099328s
	[INFO] 10.244.0.4:45687 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004859s
	[INFO] 10.244.2.2:35669 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019184s
	[INFO] 10.244.2.2:54242 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000232065s
	[INFO] 10.244.2.2:41931 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000140914s
	[INFO] 10.244.0.4:48531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105047s
	[INFO] 10.244.0.4:36756 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000068167s
	
	
	==> coredns [f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd] <==
	[INFO] 10.244.0.4:58900 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184784s
	[INFO] 10.244.0.4:59585 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.004212695s
	[INFO] 10.244.0.4:42331 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001567158s
	[INFO] 10.244.2.2:43555 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003700387s
	[INFO] 10.244.2.2:38437 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000268841s
	[INFO] 10.244.1.2:36722 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174774s
	[INFO] 10.244.1.2:46295 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000167521s
	[INFO] 10.244.1.2:36004 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192453s
	[INFO] 10.244.1.2:54275 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001271437s
	[INFO] 10.244.1.2:48954 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183213s
	[INFO] 10.244.1.2:57839 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00017811s
	[INFO] 10.244.0.4:54946 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001925365s
	[INFO] 10.244.0.4:59669 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000722s
	[INFO] 10.244.0.4:40897 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074421s
	[INFO] 10.244.0.4:46937 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174065s
	[INFO] 10.244.0.4:34613 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075946s
	[INFO] 10.244.2.2:44189 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216239s
	[INFO] 10.244.0.4:39246 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155453s
	[INFO] 10.244.2.2:48134 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000162494s
	[INFO] 10.244.1.2:44589 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125364s
	[INFO] 10.244.1.2:59702 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00019329s
	[INFO] 10.244.1.2:58920 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000146935s
	[INFO] 10.244.1.2:55802 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000116158s
	[INFO] 10.244.0.4:47226 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097556s
	[INFO] 10.244.0.4:42857 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073279s
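
Both CoreDNS replicas show ordinary query traffic (kubernetes.default, host.minikube.internal, reverse lookups) from pods in 10.244.0.0/24, 10.244.1.0/24 and 10.244.2.0/24, so in-cluster DNS was still answering across all three pod CIDRs when this dump was captured. A sketch for pulling the same logs directly, assuming the kubectl context carries the profile name:

    # Pod names are taken from the container status table above; the context name is assumed.
    kubectl --context ha-792382 -n kube-system logs coredns-7c65d6cfc9-8hlml
    kubectl --context ha-792382 -n kube-system logs coredns-7c65d6cfc9-rz6mw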
	
	
	==> describe nodes <==
	Name:               ha-792382
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792382
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=ha-792382
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T10_49_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:49:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792382
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:56:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 10:52:55 +0000   Mon, 09 Dec 2024 10:49:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 10:52:55 +0000   Mon, 09 Dec 2024 10:49:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 10:52:55 +0000   Mon, 09 Dec 2024 10:49:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 10:52:55 +0000   Mon, 09 Dec 2024 10:50:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    ha-792382
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c956a5ad4d142099b593c1d9352f7b5
	  System UUID:                2c956a5a-d4d1-4209-9b59-3c1d9352f7b5
	  Boot ID:                    5140ef96-1a92-4f56-b80b-7e99ce150ca0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-z9wjm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 coredns-7c65d6cfc9-8hlml             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m29s
	  kube-system                 coredns-7c65d6cfc9-rz6mw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m29s
	  kube-system                 etcd-ha-792382                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m33s
	  kube-system                 kindnet-bqp2z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m29s
	  kube-system                 kube-apiserver-ha-792382             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-controller-manager-ha-792382    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-proxy-wrvgb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-scheduler-ha-792382             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-vip-ha-792382                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m26s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m40s (x7 over 6m40s)  kubelet          Node ha-792382 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m40s (x8 over 6m40s)  kubelet          Node ha-792382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m40s (x8 over 6m40s)  kubelet          Node ha-792382 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m34s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m33s                  kubelet          Node ha-792382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m33s                  kubelet          Node ha-792382 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m33s                  kubelet          Node ha-792382 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m30s                  node-controller  Node ha-792382 event: Registered Node ha-792382 in Controller
	  Normal  NodeReady                6m13s                  kubelet          Node ha-792382 status is now: NodeReady
	  Normal  RegisteredNode           5m33s                  node-controller  Node ha-792382 event: Registered Node ha-792382 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-792382 event: Registered Node ha-792382 in Controller
	
	
	Name:               ha-792382-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792382-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=ha-792382
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T10_50_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:50:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792382-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:53:37 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 09 Dec 2024 10:52:48 +0000   Mon, 09 Dec 2024 10:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 09 Dec 2024 10:52:48 +0000   Mon, 09 Dec 2024 10:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 09 Dec 2024 10:52:48 +0000   Mon, 09 Dec 2024 10:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 09 Dec 2024 10:52:48 +0000   Mon, 09 Dec 2024 10:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-792382-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 167721adca2249268bf51688530c2893
	  System UUID:                167721ad-ca22-4926-8bf5-1688530c2893
	  Boot ID:                    74f1c671-e420-4f88-b05b-e50c0597ee01
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rbrpt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 etcd-ha-792382-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m39s
	  kube-system                 kindnet-hkrhk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m41s
	  kube-system                 kube-apiserver-ha-792382-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-controller-manager-ha-792382-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-proxy-dckpl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-scheduler-ha-792382-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-vip-ha-792382-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m36s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m41s (x8 over 5m41s)  kubelet          Node ha-792382-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m41s (x8 over 5m41s)  kubelet          Node ha-792382-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m41s (x7 over 5m41s)  kubelet          Node ha-792382-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m40s                  node-controller  Node ha-792382-m02 event: Registered Node ha-792382-m02 in Controller
	  Normal  RegisteredNode           5m33s                  node-controller  Node ha-792382-m02 event: Registered Node ha-792382-m02 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-792382-m02 event: Registered Node ha-792382-m02 in Controller
	  Normal  NodeNotReady             2m5s                   node-controller  Node ha-792382-m02 status is now: NodeNotReady
	
	
	Name:               ha-792382-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792382-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=ha-792382
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T10_52_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:51:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792382-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:56:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 10:53:00 +0000   Mon, 09 Dec 2024 10:51:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 10:53:00 +0000   Mon, 09 Dec 2024 10:51:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 10:53:00 +0000   Mon, 09 Dec 2024 10:51:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 10:53:00 +0000   Mon, 09 Dec 2024 10:52:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.82
	  Hostname:    ha-792382-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7e770a97238401cb03ba22edd7f66bc
	  System UUID:                c7e770a9-7238-401c-b03b-a22edd7f66bc
	  Boot ID:                    75bcd068-8763-4e3a-b01e-036ac11d2956
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ft8s2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 etcd-ha-792382-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m26s
	  kube-system                 kindnet-6hlht                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m27s
	  kube-system                 kube-apiserver-ha-792382-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-controller-manager-ha-792382-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-proxy-2l42s                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-scheduler-ha-792382-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-vip-ha-792382-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m22s                  kube-proxy       
	  Normal  Starting                 4m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m27s (x8 over 4m27s)  kubelet          Node ha-792382-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s (x8 over 4m27s)  kubelet          Node ha-792382-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s (x7 over 4m27s)  kubelet          Node ha-792382-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m25s                  node-controller  Node ha-792382-m03 event: Registered Node ha-792382-m03 in Controller
	  Normal  RegisteredNode           4m23s                  node-controller  Node ha-792382-m03 event: Registered Node ha-792382-m03 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-792382-m03 event: Registered Node ha-792382-m03 in Controller
	
	
	Name:               ha-792382-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792382-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=ha-792382
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_09T10_53_05_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 10:53:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792382-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 10:56:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 10:53:35 +0000   Mon, 09 Dec 2024 10:53:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 10:53:35 +0000   Mon, 09 Dec 2024 10:53:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 10:53:35 +0000   Mon, 09 Dec 2024 10:53:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 10:53:35 +0000   Mon, 09 Dec 2024 10:53:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    ha-792382-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7109c0766654d148c611df97b2ed795
	  System UUID:                f7109c07-6665-4d14-8c61-1df97b2ed795
	  Boot ID:                    8d79820d-d818-486f-88fb-9a376256bc79
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rwsmp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m21s
	  kube-system                 kube-proxy-727n6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m21s (x2 over 3m21s)  kubelet          Node ha-792382-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m21s (x2 over 3m21s)  kubelet          Node ha-792382-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m21s (x2 over 3m21s)  kubelet          Node ha-792382-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m20s                  node-controller  Node ha-792382-m04 event: Registered Node ha-792382-m04 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-792382-m04 event: Registered Node ha-792382-m04 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-792382-m04 event: Registered Node ha-792382-m04 in Controller
	  Normal  NodeReady                3m                     kubelet          Node ha-792382-m04 status is now: NodeReady
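
Of the four node descriptions above, ha-792382-m02 is the notable one: its kubelet last renewed its lease at 10:53:37, all four conditions flipped to Unknown ("Kubelet stopped posting node status"), the node-controller recorded NodeNotReady about two minutes before the dump, and the node.kubernetes.io/unreachable NoSchedule/NoExecute taints were applied. That is consistent with the secondary control-plane VM having been stopped while ha-792382, -m03 and -m04 stayed Ready. A sketch for querying just that node's state, with the context name assumed from the profile:

    # Print condition types and statuses for the stopped secondary, then its taints.
    kubectl --context ha-792382 get node ha-792382-m02 \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
    kubectl --context ha-792382 describe node ha-792382-m02 | grep -A2 Taints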
	
	
	==> dmesg <==
	[Dec 9 10:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052723] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037555] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.827157] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.929161] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.560988] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.837514] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.057481] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052320] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.193651] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.117185] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.263430] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.805323] systemd-fstab-generator[749]: Ignoring "noauto" option for root device
	[  +3.647118] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.055434] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.026961] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.076746] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.128281] kauditd_printk_skb: 21 callbacks suppressed
	[Dec 9 10:50] kauditd_printk_skb: 38 callbacks suppressed
	[ +38.131475] kauditd_printk_skb: 28 callbacks suppressed
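
The dmesg excerpt is boot-time noise only (nomodeset warning, regulatory.db firmware load failure, NFSD recovery-directory messages, systemd-fstab-generator lines); nothing kernel-side explains the failures. The same ring buffer can be read from inside the guest, assuming the profile name above:

    # Profile name ha-792382 is assumed from the node names in this report.
    minikube -p ha-792382 ssh -- sudo dmesg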
	
	
	==> etcd [778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63] <==
	{"level":"warn","ts":"2024-12-09T10:56:25.077984Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.087650Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"950d7fcf7e5c88d0","rtt":"936.555µs","error":"dial tcp 192.168.39.89:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-12-09T10:56:25.087716Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"950d7fcf7e5c88d0","rtt":"8.363905ms","error":"dial tcp 192.168.39.89:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-12-09T10:56:25.088252Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.091866Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.095163Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.101691Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.108218Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.115485Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.118651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.121626Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.126908Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.131621Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.134240Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.141522Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.146091Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.149461Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.153745Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.161495Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.168725Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.219272Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.231795Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.231961Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.234475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-09T10:56:25.236896Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"950d7fcf7e5c88d0","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:56:25 up 7 min,  0 users,  load average: 0.38, 0.31, 0.16
	Linux ha-792382 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3] <==
	I1209 10:55:51.786644       1 main.go:324] Node ha-792382-m04 has CIDR [10.244.3.0/24] 
	I1209 10:56:01.783030       1 main.go:297] Handling node with IPs: map[192.168.39.69:{}]
	I1209 10:56:01.783176       1 main.go:301] handling current node
	I1209 10:56:01.783209       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1209 10:56:01.783262       1 main.go:324] Node ha-792382-m02 has CIDR [10.244.1.0/24] 
	I1209 10:56:01.783503       1 main.go:297] Handling node with IPs: map[192.168.39.82:{}]
	I1209 10:56:01.783567       1 main.go:324] Node ha-792382-m03 has CIDR [10.244.2.0/24] 
	I1209 10:56:01.784071       1 main.go:297] Handling node with IPs: map[192.168.39.54:{}]
	I1209 10:56:01.784166       1 main.go:324] Node ha-792382-m04 has CIDR [10.244.3.0/24] 
	I1209 10:56:11.792014       1 main.go:297] Handling node with IPs: map[192.168.39.69:{}]
	I1209 10:56:11.792252       1 main.go:301] handling current node
	I1209 10:56:11.792297       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1209 10:56:11.792379       1 main.go:324] Node ha-792382-m02 has CIDR [10.244.1.0/24] 
	I1209 10:56:11.792752       1 main.go:297] Handling node with IPs: map[192.168.39.82:{}]
	I1209 10:56:11.792788       1 main.go:324] Node ha-792382-m03 has CIDR [10.244.2.0/24] 
	I1209 10:56:11.792953       1 main.go:297] Handling node with IPs: map[192.168.39.54:{}]
	I1209 10:56:11.792978       1 main.go:324] Node ha-792382-m04 has CIDR [10.244.3.0/24] 
	I1209 10:56:21.792397       1 main.go:297] Handling node with IPs: map[192.168.39.69:{}]
	I1209 10:56:21.792532       1 main.go:301] handling current node
	I1209 10:56:21.792631       1 main.go:297] Handling node with IPs: map[192.168.39.89:{}]
	I1209 10:56:21.792641       1 main.go:324] Node ha-792382-m02 has CIDR [10.244.1.0/24] 
	I1209 10:56:21.793010       1 main.go:297] Handling node with IPs: map[192.168.39.82:{}]
	I1209 10:56:21.793029       1 main.go:324] Node ha-792382-m03 has CIDR [10.244.2.0/24] 
	I1209 10:56:21.793227       1 main.go:297] Handling node with IPs: map[192.168.39.54:{}]
	I1209 10:56:21.793244       1 main.go:324] Node ha-792382-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f] <==
	I1209 10:49:52.072307       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1209 10:49:52.095069       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1209 10:49:56.392767       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1209 10:49:56.516080       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1209 10:51:59.302973       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1209 10:51:59.303668       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 331.746µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1209 10:51:59.304570       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1209 10:51:59.308414       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1209 10:51:59.309695       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="6.795998ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1209 10:52:32.421048       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43832: use of closed network connection
	E1209 10:52:32.619590       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43852: use of closed network connection
	E1209 10:52:32.815616       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43862: use of closed network connection
	E1209 10:52:33.010440       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43888: use of closed network connection
	E1209 10:52:33.191451       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43910: use of closed network connection
	E1209 10:52:33.385647       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43930: use of closed network connection
	E1209 10:52:33.571472       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43946: use of closed network connection
	E1209 10:52:33.741655       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43972: use of closed network connection
	E1209 10:52:33.919176       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:43990: use of closed network connection
	E1209 10:52:34.226233       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44000: use of closed network connection
	E1209 10:52:34.408728       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44016: use of closed network connection
	E1209 10:52:34.588897       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44034: use of closed network connection
	E1209 10:52:34.765608       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44050: use of closed network connection
	E1209 10:52:34.943122       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44058: use of closed network connection
	E1209 10:52:35.115793       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44068: use of closed network connection
	W1209 10:54:00.405476       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.69 192.168.39.82]
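
The kube-apiserver log tells the same story from the API side: the "use of closed network connection" lines are ordinary client disconnects, and at 10:54:00 the endpoints for the "kubernetes" service were reset to 192.168.39.69 and 192.168.39.82 only, dropping m02's 192.168.39.89. A quick check, with the context name assumed:

    # Lists which control-plane addresses are still registered behind the kubernetes service.
    kubectl --context ha-792382 -n default get endpoints kubernetes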
	
	
	==> kube-controller-manager [00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604] <==
	I1209 10:53:04.483677       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-792382-m04" podCIDRs=["10.244.3.0/24"]
	I1209 10:53:04.483873       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:04.484031       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:04.508782       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:04.947247       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:05.336150       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:05.632610       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-792382-m04"
	I1209 10:53:05.665145       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:07.101579       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:07.148958       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:08.041907       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:08.474258       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:14.706287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:25.397617       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:25.397765       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-792382-m04"
	I1209 10:53:25.412410       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:25.649201       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:53:35.378859       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m04"
	I1209 10:54:20.671888       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m02"
	I1209 10:54:20.672434       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-792382-m04"
	I1209 10:54:20.703980       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m02"
	I1209 10:54:20.840624       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.419282ms"
	I1209 10:54:20.841721       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="157.508µs"
	I1209 10:54:22.157822       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m02"
	I1209 10:54:25.899451       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-792382-m02"
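
The controller-manager entries at 10:54:20 show the node-ipam controller resyncing ha-792382-m02 and the busybox-7dff88458 ReplicaSet being re-reconciled at the same moment m02 went NotReady. To see where the busybox replicas are actually running (context name assumed):

    # The ReplicaSet name busybox-7dff88458 comes from the log above.
    kubectl --context ha-792382 get pods -o wide | grep busybox-7dff88458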
	
	
	==> kube-proxy [3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 10:49:58.601423       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 10:49:58.617859       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	E1209 10:49:58.617945       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 10:49:58.657152       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 10:49:58.657213       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 10:49:58.657247       1 server_linux.go:169] "Using iptables Proxier"
	I1209 10:49:58.660760       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 10:49:58.661154       1 server.go:483] "Version info" version="v1.31.2"
	I1209 10:49:58.661230       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 10:49:58.663604       1 config.go:199] "Starting service config controller"
	I1209 10:49:58.663767       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 10:49:58.664471       1 config.go:105] "Starting endpoint slice config controller"
	I1209 10:49:58.664498       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 10:49:58.666409       1 config.go:328] "Starting node config controller"
	I1209 10:49:58.666433       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 10:49:58.765096       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 10:49:58.767373       1 shared_informer.go:320] Caches are synced for service config
	I1209 10:49:58.767373       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee] <==
	W1209 10:49:49.686971       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 10:49:49.687036       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1209 10:49:49.693717       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1209 10:49:49.693755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:49.756854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 10:49:49.756907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:49.761365       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 10:49:49.761407       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:49.901909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 10:49:49.902484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:50.012571       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1209 10:49:50.012617       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:50.018069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1209 10:49:50.018128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:50.045681       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1209 10:49:50.045732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 10:49:50.048146       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 10:49:50.048203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1209 10:49:51.665195       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1209 10:52:27.353144       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ft8s2\": pod busybox-7dff88458-ft8s2 is already assigned to node \"ha-792382-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-ft8s2" node="ha-792382-m03"
	E1209 10:52:27.354035       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 51271b6c-9fb3-4893-8502-54b74c4cbaa5(default/busybox-7dff88458-ft8s2) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-ft8s2"
	E1209 10:52:27.354086       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ft8s2\": pod busybox-7dff88458-ft8s2 is already assigned to node \"ha-792382-m03\"" pod="default/busybox-7dff88458-ft8s2"
	I1209 10:52:27.354141       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-ft8s2" node="ha-792382-m03"
	E1209 10:52:27.402980       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-z9wjm\": pod busybox-7dff88458-z9wjm is already assigned to node \"ha-792382\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-z9wjm" node="ha-792382"
	E1209 10:52:27.403164       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-z9wjm\": pod busybox-7dff88458-z9wjm is already assigned to node \"ha-792382\"" pod="default/busybox-7dff88458-z9wjm"
	
	
	==> kubelet <==
	Dec 09 10:54:52 ha-792382 kubelet[1304]: E1209 10:54:52.082247    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741692081818749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:54:52 ha-792382 kubelet[1304]: E1209 10:54:52.082273    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741692081818749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:02 ha-792382 kubelet[1304]: E1209 10:55:02.088147    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741702086894201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:02 ha-792382 kubelet[1304]: E1209 10:55:02.088210    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741702086894201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:12 ha-792382 kubelet[1304]: E1209 10:55:12.089935    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741712089600382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:12 ha-792382 kubelet[1304]: E1209 10:55:12.090372    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741712089600382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:22 ha-792382 kubelet[1304]: E1209 10:55:22.094837    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741722094438540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:22 ha-792382 kubelet[1304]: E1209 10:55:22.094877    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741722094438540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:32 ha-792382 kubelet[1304]: E1209 10:55:32.096240    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741732095902907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:32 ha-792382 kubelet[1304]: E1209 10:55:32.096268    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741732095902907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:42 ha-792382 kubelet[1304]: E1209 10:55:42.098166    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741742097877429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:42 ha-792382 kubelet[1304]: E1209 10:55:42.098566    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741742097877429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:52 ha-792382 kubelet[1304]: E1209 10:55:52.004085    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 10:55:52 ha-792382 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 10:55:52 ha-792382 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 10:55:52 ha-792382 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 10:55:52 ha-792382 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 10:55:52 ha-792382 kubelet[1304]: E1209 10:55:52.100761    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741752100425512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:55:52 ha-792382 kubelet[1304]: E1209 10:55:52.100783    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741752100425512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:56:02 ha-792382 kubelet[1304]: E1209 10:56:02.102546    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741762102177289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:56:02 ha-792382 kubelet[1304]: E1209 10:56:02.102939    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741762102177289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:56:12 ha-792382 kubelet[1304]: E1209 10:56:12.104513    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741772104031126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:56:12 ha-792382 kubelet[1304]: E1209 10:56:12.104554    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741772104031126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:56:22 ha-792382 kubelet[1304]: E1209 10:56:22.106017    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741782105667766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 10:56:22 ha-792382 kubelet[1304]: E1209 10:56:22.106283    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733741782105667766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-792382 -n ha-792382
helpers_test.go:261: (dbg) Run:  kubectl --context ha-792382 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.22s)
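The recurring kubelet error in the log above comes from the iptables canary: on this guest kernel the ip6tables "nat" table is not available, so the KUBE-KUBELET-CANARY chain cannot be created. A minimal probe for that condition, assuming only the Go standard library and an ip6tables binary on PATH (an illustrative sketch, not the kubelet's actual canary code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probeIP6NatTable reports whether the ip6tables "nat" table can be listed.
	// On kernels without ip6 NAT support this fails, which matches the
	// "Table does not exist (do you need to insmod?)" error in the log above.
	func probeIP6NatTable() error {
		out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
		if err != nil {
			return fmt.Errorf("ip6tables nat table unavailable: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := probeIP6NatTable(); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ip6tables nat table is usable")
	}

On a host where the probe fails, the kubelet messages above are expected background noise rather than the cause of this particular test failure.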

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (361.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-792382 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-792382 -v=7 --alsologtostderr
E1209 10:56:33.303540  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:58:22.652461  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-792382 -v=7 --alsologtostderr: exit status 82 (2m1.912872689s)

                                                
                                                
-- stdout --
	* Stopping node "ha-792382-m04"  ...
	* Stopping node "ha-792382-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 10:56:26.278276  632610 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:56:26.278426  632610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:56:26.278437  632610 out.go:358] Setting ErrFile to fd 2...
	I1209 10:56:26.278444  632610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:56:26.278606  632610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 10:56:26.278843  632610 out.go:352] Setting JSON to false
	I1209 10:56:26.278933  632610 mustload.go:65] Loading cluster: ha-792382
	I1209 10:56:26.279401  632610 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:56:26.279527  632610 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:56:26.279784  632610 mustload.go:65] Loading cluster: ha-792382
	I1209 10:56:26.279972  632610 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:56:26.280008  632610 stop.go:39] StopHost: ha-792382-m04
	I1209 10:56:26.280454  632610 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:56:26.280523  632610 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:56:26.296223  632610 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45975
	I1209 10:56:26.296803  632610 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:56:26.297481  632610 main.go:141] libmachine: Using API Version  1
	I1209 10:56:26.297511  632610 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:56:26.297883  632610 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:56:26.300396  632610 out.go:177] * Stopping node "ha-792382-m04"  ...
	I1209 10:56:26.301552  632610 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1209 10:56:26.301594  632610 main.go:141] libmachine: (ha-792382-m04) Calling .DriverName
	I1209 10:56:26.301803  632610 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1209 10:56:26.301852  632610 main.go:141] libmachine: (ha-792382-m04) Calling .GetSSHHostname
	I1209 10:56:26.304848  632610 main.go:141] libmachine: (ha-792382-m04) DBG | domain ha-792382-m04 has defined MAC address 52:54:00:bf:1c:a3 in network mk-ha-792382
	I1209 10:56:26.305258  632610 main.go:141] libmachine: (ha-792382-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:1c:a3", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:52:50 +0000 UTC Type:0 Mac:52:54:00:bf:1c:a3 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-792382-m04 Clientid:01:52:54:00:bf:1c:a3}
	I1209 10:56:26.305289  632610 main.go:141] libmachine: (ha-792382-m04) DBG | domain ha-792382-m04 has defined IP address 192.168.39.54 and MAC address 52:54:00:bf:1c:a3 in network mk-ha-792382
	I1209 10:56:26.305419  632610 main.go:141] libmachine: (ha-792382-m04) Calling .GetSSHPort
	I1209 10:56:26.305629  632610 main.go:141] libmachine: (ha-792382-m04) Calling .GetSSHKeyPath
	I1209 10:56:26.305815  632610 main.go:141] libmachine: (ha-792382-m04) Calling .GetSSHUsername
	I1209 10:56:26.305960  632610 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m04/id_rsa Username:docker}
	I1209 10:56:26.399586  632610 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1209 10:56:26.452489  632610 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1209 10:56:26.507055  632610 main.go:141] libmachine: Stopping "ha-792382-m04"...
	I1209 10:56:26.507088  632610 main.go:141] libmachine: (ha-792382-m04) Calling .GetState
	I1209 10:56:26.508885  632610 main.go:141] libmachine: (ha-792382-m04) Calling .Stop
	I1209 10:56:26.513381  632610 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 0/120
	I1209 10:56:27.707747  632610 main.go:141] libmachine: (ha-792382-m04) Calling .GetState
	I1209 10:56:27.708890  632610 main.go:141] libmachine: Machine "ha-792382-m04" was stopped.
	I1209 10:56:27.708924  632610 stop.go:75] duration metric: took 1.4073621s to stop
	I1209 10:56:27.708955  632610 stop.go:39] StopHost: ha-792382-m03
	I1209 10:56:27.709338  632610 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:56:27.709399  632610 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:56:27.724105  632610 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37371
	I1209 10:56:27.724672  632610 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:56:27.725251  632610 main.go:141] libmachine: Using API Version  1
	I1209 10:56:27.725273  632610 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:56:27.725627  632610 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:56:27.727598  632610 out.go:177] * Stopping node "ha-792382-m03"  ...
	I1209 10:56:27.728809  632610 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1209 10:56:27.728834  632610 main.go:141] libmachine: (ha-792382-m03) Calling .DriverName
	I1209 10:56:27.729056  632610 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1209 10:56:27.729086  632610 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHHostname
	I1209 10:56:27.732021  632610 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:56:27.732530  632610 main.go:141] libmachine: (ha-792382-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:ae:3c", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:51:25 +0000 UTC Type:0 Mac:52:54:00:10:ae:3c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:ha-792382-m03 Clientid:01:52:54:00:10:ae:3c}
	I1209 10:56:27.732560  632610 main.go:141] libmachine: (ha-792382-m03) DBG | domain ha-792382-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:10:ae:3c in network mk-ha-792382
	I1209 10:56:27.732740  632610 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHPort
	I1209 10:56:27.732890  632610 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHKeyPath
	I1209 10:56:27.733027  632610 main.go:141] libmachine: (ha-792382-m03) Calling .GetSSHUsername
	I1209 10:56:27.733165  632610 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m03/id_rsa Username:docker}
	I1209 10:56:27.823754  632610 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1209 10:56:27.877126  632610 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1209 10:56:27.930767  632610 main.go:141] libmachine: Stopping "ha-792382-m03"...
	I1209 10:56:27.930794  632610 main.go:141] libmachine: (ha-792382-m03) Calling .GetState
	I1209 10:56:27.932314  632610 main.go:141] libmachine: (ha-792382-m03) Calling .Stop
	I1209 10:56:27.935881  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 0/120
	I1209 10:56:28.937329  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 1/120
	I1209 10:56:29.938741  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 2/120
	I1209 10:56:30.941219  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 3/120
	I1209 10:56:31.942748  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 4/120
	I1209 10:56:32.944789  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 5/120
	I1209 10:56:33.947340  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 6/120
	I1209 10:56:34.948872  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 7/120
	I1209 10:56:35.950438  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 8/120
	I1209 10:56:36.951958  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 9/120
	I1209 10:56:37.954087  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 10/120
	I1209 10:56:38.955785  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 11/120
	I1209 10:56:39.957668  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 12/120
	I1209 10:56:40.959235  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 13/120
	I1209 10:56:41.960672  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 14/120
	I1209 10:56:42.962959  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 15/120
	I1209 10:56:43.964486  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 16/120
	I1209 10:56:44.965745  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 17/120
	I1209 10:56:45.967241  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 18/120
	I1209 10:56:46.968823  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 19/120
	I1209 10:56:47.970377  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 20/120
	I1209 10:56:48.972177  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 21/120
	I1209 10:56:49.973724  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 22/120
	I1209 10:56:50.975121  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 23/120
	I1209 10:56:51.976546  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 24/120
	I1209 10:56:52.978201  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 25/120
	I1209 10:56:53.979569  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 26/120
	I1209 10:56:54.980881  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 27/120
	I1209 10:56:55.982315  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 28/120
	I1209 10:56:56.984716  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 29/120
	I1209 10:56:57.986215  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 30/120
	I1209 10:56:58.987829  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 31/120
	I1209 10:56:59.989222  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 32/120
	I1209 10:57:00.990804  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 33/120
	I1209 10:57:01.992775  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 34/120
	I1209 10:57:02.994712  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 35/120
	I1209 10:57:03.996119  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 36/120
	I1209 10:57:04.997854  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 37/120
	I1209 10:57:05.999274  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 38/120
	I1209 10:57:07.000530  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 39/120
	I1209 10:57:08.002274  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 40/120
	I1209 10:57:09.003665  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 41/120
	I1209 10:57:10.004949  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 42/120
	I1209 10:57:11.006233  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 43/120
	I1209 10:57:12.007483  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 44/120
	I1209 10:57:13.009147  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 45/120
	I1209 10:57:14.010579  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 46/120
	I1209 10:57:15.012157  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 47/120
	I1209 10:57:16.013681  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 48/120
	I1209 10:57:17.015128  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 49/120
	I1209 10:57:18.017001  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 50/120
	I1209 10:57:19.018238  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 51/120
	I1209 10:57:20.019769  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 52/120
	I1209 10:57:21.021083  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 53/120
	I1209 10:57:22.022452  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 54/120
	I1209 10:57:23.024577  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 55/120
	I1209 10:57:24.025947  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 56/120
	I1209 10:57:25.027188  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 57/120
	I1209 10:57:26.028637  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 58/120
	I1209 10:57:27.030142  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 59/120
	I1209 10:57:28.032100  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 60/120
	I1209 10:57:29.034048  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 61/120
	I1209 10:57:30.035948  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 62/120
	I1209 10:57:31.037185  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 63/120
	I1209 10:57:32.038781  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 64/120
	I1209 10:57:33.040707  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 65/120
	I1209 10:57:34.042109  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 66/120
	I1209 10:57:35.043774  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 67/120
	I1209 10:57:36.045247  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 68/120
	I1209 10:57:37.046796  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 69/120
	I1209 10:57:38.048687  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 70/120
	I1209 10:57:39.050044  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 71/120
	I1209 10:57:40.051492  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 72/120
	I1209 10:57:41.052702  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 73/120
	I1209 10:57:42.053999  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 74/120
	I1209 10:57:43.055324  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 75/120
	I1209 10:57:44.056627  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 76/120
	I1209 10:57:45.058209  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 77/120
	I1209 10:57:46.059418  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 78/120
	I1209 10:57:47.060801  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 79/120
	I1209 10:57:48.062589  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 80/120
	I1209 10:57:49.063949  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 81/120
	I1209 10:57:50.065224  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 82/120
	I1209 10:57:51.066622  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 83/120
	I1209 10:57:52.067952  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 84/120
	I1209 10:57:53.069689  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 85/120
	I1209 10:57:54.071130  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 86/120
	I1209 10:57:55.072469  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 87/120
	I1209 10:57:56.074042  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 88/120
	I1209 10:57:57.075430  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 89/120
	I1209 10:57:58.077799  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 90/120
	I1209 10:57:59.079257  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 91/120
	I1209 10:58:00.080559  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 92/120
	I1209 10:58:01.082610  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 93/120
	I1209 10:58:02.084025  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 94/120
	I1209 10:58:03.086000  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 95/120
	I1209 10:58:04.087543  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 96/120
	I1209 10:58:05.089096  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 97/120
	I1209 10:58:06.090543  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 98/120
	I1209 10:58:07.092827  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 99/120
	I1209 10:58:08.094740  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 100/120
	I1209 10:58:09.096682  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 101/120
	I1209 10:58:10.098071  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 102/120
	I1209 10:58:11.100221  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 103/120
	I1209 10:58:12.101680  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 104/120
	I1209 10:58:13.103337  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 105/120
	I1209 10:58:14.104579  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 106/120
	I1209 10:58:15.106486  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 107/120
	I1209 10:58:16.107873  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 108/120
	I1209 10:58:17.109342  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 109/120
	I1209 10:58:18.111104  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 110/120
	I1209 10:58:19.112982  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 111/120
	I1209 10:58:20.114622  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 112/120
	I1209 10:58:21.116038  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 113/120
	I1209 10:58:22.117622  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 114/120
	I1209 10:58:23.119783  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 115/120
	I1209 10:58:24.122218  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 116/120
	I1209 10:58:25.123632  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 117/120
	I1209 10:58:26.125019  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 118/120
	I1209 10:58:27.126664  632610 main.go:141] libmachine: (ha-792382-m03) Waiting for machine to stop 119/120
	I1209 10:58:28.128125  632610 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1209 10:58:28.128182  632610 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1209 10:58:28.129971  632610 out.go:201] 
	W1209 10:58:28.131122  632610 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1209 10:58:28.131140  632610 out.go:270] * 
	* 
	W1209 10:58:28.134790  632610 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 10:58:28.135958  632610 out.go:201] 

                                                
                                                
** /stderr **
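The stderr above shows the stop path polling the VM state 120 times, roughly once per second, before giving up with GUEST_STOP_TIMEOUT (exit status 82). A minimal sketch of that bounded-wait pattern, assuming a hypothetical isRunning callback in place of the libmachine state query (this is an illustration of the pattern visible in the log, not minikube's actual stop code):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop polls isRunning up to maxTries times, one second apart,
	// mirroring the "Waiting for machine to stop N/120" lines in the log above.
	func waitForStop(isRunning func() bool, maxTries int) error {
		for i := 0; i < maxTries; i++ {
			if !isRunning() {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxTries)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// A VM that never reaches a stopped state exercises the timeout branch.
		stillRunning := func() bool { return true }
		if err := waitForStop(stillRunning, 3); err != nil {
			fmt.Println("stop err:", err)
		}
	}

With a guest that never stops, the loop exhausts its budget and returns the same "unable to stop vm" error that the test reports below.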
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-792382 -v=7 --alsologtostderr" : exit status 82
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-792382 --wait=true -v=7 --alsologtostderr
E1209 10:58:50.360844  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:01:33.303519  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-792382 --wait=true -v=7 --alsologtostderr: (3m57.058687344s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-792382
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-792382 -n ha-792382
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-792382 logs -n 25: (2.124544331s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m02:/home/docker/cp-test_ha-792382-m03_ha-792382-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m02 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m03_ha-792382-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04:/home/docker/cp-test_ha-792382-m03_ha-792382-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m04 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m03_ha-792382-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp testdata/cp-test.txt                                                | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3558467820/001/cp-test_ha-792382-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382:/home/docker/cp-test_ha-792382-m04_ha-792382.txt                       |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382 sudo cat                                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382.txt                                 |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m02:/home/docker/cp-test_ha-792382-m04_ha-792382-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m02 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03:/home/docker/cp-test_ha-792382-m04_ha-792382-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m03 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-792382 node stop m02 -v=7                                                     | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-792382 node start m02 -v=7                                                    | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-792382 -v=7                                                           | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-792382 -v=7                                                                | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-792382 --wait=true -v=7                                                    | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:58 UTC | 09 Dec 24 11:02 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-792382                                                                | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 11:02 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 10:58:28
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 10:58:28.191865  633119 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:58:28.191988  633119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:58:28.191998  633119 out.go:358] Setting ErrFile to fd 2...
	I1209 10:58:28.192003  633119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:58:28.192202  633119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 10:58:28.192853  633119 out.go:352] Setting JSON to false
	I1209 10:58:28.193921  633119 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":13252,"bootTime":1733728656,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 10:58:28.194037  633119 start.go:139] virtualization: kvm guest
	I1209 10:58:28.196208  633119 out.go:177] * [ha-792382] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 10:58:28.197833  633119 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 10:58:28.197830  633119 notify.go:220] Checking for updates...
	I1209 10:58:28.200528  633119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 10:58:28.201941  633119 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:58:28.203146  633119 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:58:28.204302  633119 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 10:58:28.205523  633119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 10:58:28.207407  633119 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:58:28.207565  633119 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 10:58:28.208116  633119 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:58:28.208157  633119 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:58:28.223976  633119 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
	I1209 10:58:28.224573  633119 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:58:28.225315  633119 main.go:141] libmachine: Using API Version  1
	I1209 10:58:28.225346  633119 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:58:28.225733  633119 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:58:28.225929  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:58:28.266828  633119 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 10:58:28.268216  633119 start.go:297] selected driver: kvm2
	I1209 10:58:28.268235  633119 start.go:901] validating driver "kvm2" against &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.54 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:58:28.268403  633119 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 10:58:28.268799  633119 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:58:28.268907  633119 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 10:58:28.284451  633119 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 10:58:28.285578  633119 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 10:58:28.285629  633119 cni.go:84] Creating CNI manager for ""
	I1209 10:58:28.285696  633119 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1209 10:58:28.285783  633119 start.go:340] cluster config:
	{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.54 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:58:28.285928  633119 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:58:28.287683  633119 out.go:177] * Starting "ha-792382" primary control-plane node in "ha-792382" cluster
	I1209 10:58:28.288797  633119 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:58:28.288833  633119 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 10:58:28.288844  633119 cache.go:56] Caching tarball of preloaded images
	I1209 10:58:28.288931  633119 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 10:58:28.288941  633119 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 10:58:28.289069  633119 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:58:28.289270  633119 start.go:360] acquireMachinesLock for ha-792382: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 10:58:28.289310  633119 start.go:364] duration metric: took 23.15µs to acquireMachinesLock for "ha-792382"
	I1209 10:58:28.289325  633119 start.go:96] Skipping create...Using existing machine configuration
	I1209 10:58:28.289331  633119 fix.go:54] fixHost starting: 
	I1209 10:58:28.289621  633119 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:58:28.289655  633119 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:58:28.306463  633119 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44091
	I1209 10:58:28.307025  633119 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:58:28.307611  633119 main.go:141] libmachine: Using API Version  1
	I1209 10:58:28.307644  633119 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:58:28.307965  633119 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:58:28.308153  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:58:28.308285  633119 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:58:28.309937  633119 fix.go:112] recreateIfNeeded on ha-792382: state=Running err=<nil>
	W1209 10:58:28.309962  633119 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 10:58:28.312538  633119 out.go:177] * Updating the running kvm2 "ha-792382" VM ...
	I1209 10:58:28.313754  633119 machine.go:93] provisionDockerMachine start ...
	I1209 10:58:28.313779  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:58:28.314000  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:58:28.316352  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.316746  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:58:28.316778  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.316919  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:58:28.317140  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:28.317333  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:28.317459  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:58:28.317601  633119 main.go:141] libmachine: Using SSH client type: native
	I1209 10:58:28.317852  633119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:58:28.317870  633119 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 10:58:28.432781  633119 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792382
	
	I1209 10:58:28.432821  633119 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:58:28.433137  633119 buildroot.go:166] provisioning hostname "ha-792382"
	I1209 10:58:28.433174  633119 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:58:28.433406  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:58:28.436046  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.436430  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:58:28.436451  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.436620  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:58:28.436816  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:28.436996  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:28.437169  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:58:28.437333  633119 main.go:141] libmachine: Using SSH client type: native
	I1209 10:58:28.437524  633119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:58:28.437539  633119 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792382 && echo "ha-792382" | sudo tee /etc/hostname
	I1209 10:58:28.569208  633119 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792382
	
	I1209 10:58:28.569241  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:58:28.572736  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.573149  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:58:28.573178  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.573326  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:58:28.573553  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:28.573730  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:28.573887  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:58:28.574066  633119 main.go:141] libmachine: Using SSH client type: native
	I1209 10:58:28.574274  633119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:58:28.574291  633119 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792382' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792382/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792382' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 10:58:28.678931  633119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:58:28.678977  633119 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 10:58:28.679005  633119 buildroot.go:174] setting up certificates
	I1209 10:58:28.679017  633119 provision.go:84] configureAuth start
	I1209 10:58:28.679026  633119 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:58:28.679301  633119 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:58:28.681959  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.682322  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:58:28.682351  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.682551  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:58:28.684619  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.684952  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:58:28.684973  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.685123  633119 provision.go:143] copyHostCerts
	I1209 10:58:28.685169  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:58:28.685207  633119 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 10:58:28.685224  633119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:58:28.685289  633119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 10:58:28.685363  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:58:28.685383  633119 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 10:58:28.685390  633119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:58:28.685412  633119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 10:58:28.685456  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:58:28.685472  633119 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 10:58:28.685477  633119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:58:28.685498  633119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 10:58:28.685545  633119 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.ha-792382 san=[127.0.0.1 192.168.39.69 ha-792382 localhost minikube]
	I1209 10:58:28.892199  633119 provision.go:177] copyRemoteCerts
	I1209 10:58:28.892274  633119 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 10:58:28.892309  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:58:28.895225  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.895558  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:58:28.895580  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.895803  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:58:28.896015  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:28.896149  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:58:28.896267  633119 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:58:28.977362  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 10:58:28.977440  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 10:58:29.001860  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 10:58:29.001940  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 10:58:29.025837  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 10:58:29.025908  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1209 10:58:29.049057  633119 provision.go:87] duration metric: took 370.025873ms to configureAuth
	I1209 10:58:29.049092  633119 buildroot.go:189] setting minikube options for container-runtime
	I1209 10:58:29.049327  633119 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:58:29.049421  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:58:29.052110  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:29.052514  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:58:29.052547  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:29.052732  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:58:29.052967  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:29.053186  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:29.053365  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:58:29.053556  633119 main.go:141] libmachine: Using SSH client type: native
	I1209 10:58:29.053726  633119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:58:29.053743  633119 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 10:59:59.840941  633119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 10:59:59.840982  633119 machine.go:96] duration metric: took 1m31.527209851s to provisionDockerMachine
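Most of that 1m31.5s provisioning time is the crio restart issued over SSH above (the command went out at 10:58:29 and returned at 10:59:59). A minimal sketch of commands one could run on the node to confirm the drop-in landed and to see what crio logged while restarting; the file and unit names come from the log, the journalctl window is an assumption about the guest's journald setup:

	# drop-in written by the tee command above
	cat /etc/sysconfig/crio.minikube
	# crio's own logs for the restart window (assumed journald time format)
	sudo journalctl -u crio --since "2024-12-09 10:58:29" --until "2024-12-09 11:00:00" --no-pager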
	I1209 10:59:59.841017  633119 start.go:293] postStartSetup for "ha-792382" (driver="kvm2")
	I1209 10:59:59.841034  633119 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 10:59:59.841062  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:59:59.841415  633119 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 10:59:59.841450  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:59:59.844254  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:59:59.844690  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:59:59.844714  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:59:59.844885  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:59:59.845113  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:59:59.845265  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:59:59.845395  633119 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:59:59.925350  633119 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 10:59:59.929576  633119 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 10:59:59.929634  633119 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 10:59:59.929724  633119 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 10:59:59.929797  633119 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 10:59:59.929808  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /etc/ssl/certs/6170172.pem
	I1209 10:59:59.929896  633119 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 10:59:59.939169  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:59:59.962315  633119 start.go:296] duration metric: took 121.273228ms for postStartSetup
	I1209 10:59:59.962383  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:59:59.962723  633119 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1209 10:59:59.962758  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:59:59.965355  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:59:59.965749  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:59:59.965775  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:59:59.965965  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:59:59.966140  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:59:59.966320  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:59:59.966442  633119 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	W1209 11:00:00.044743  633119 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1209 11:00:00.044776  633119 fix.go:56] duration metric: took 1m31.755445434s for fixHost
	I1209 11:00:00.044801  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 11:00:00.047669  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:00.048000  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 11:00:00.048030  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:00.048250  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 11:00:00.048461  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 11:00:00.048664  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 11:00:00.048828  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 11:00:00.049034  633119 main.go:141] libmachine: Using SSH client type: native
	I1209 11:00:00.049316  633119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 11:00:00.049343  633119 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:00:00.167593  633119 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733742000.134008518
	
	I1209 11:00:00.167620  633119 fix.go:216] guest clock: 1733742000.134008518
	I1209 11:00:00.167631  633119 fix.go:229] Guest: 2024-12-09 11:00:00.134008518 +0000 UTC Remote: 2024-12-09 11:00:00.044783223 +0000 UTC m=+91.895951206 (delta=89.225295ms)
	I1209 11:00:00.167702  633119 fix.go:200] guest clock delta is within tolerance: 89.225295ms
	I1209 11:00:00.167714  633119 start.go:83] releasing machines lock for "ha-792382", held for 1m31.87839392s
	I1209 11:00:00.167767  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 11:00:00.168055  633119 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 11:00:00.171062  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:00.171537  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 11:00:00.171561  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:00.171777  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 11:00:00.172346  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 11:00:00.172553  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 11:00:00.172649  633119 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:00:00.172708  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 11:00:00.172783  633119 ssh_runner.go:195] Run: cat /version.json
	I1209 11:00:00.172816  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 11:00:00.175471  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:00.175688  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:00.175891  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 11:00:00.175914  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:00.176100  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 11:00:00.176126  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 11:00:00.176151  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:00.176301  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 11:00:00.176321  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 11:00:00.176509  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 11:00:00.176637  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 11:00:00.176772  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 11:00:00.176844  633119 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 11:00:00.176908  633119 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 11:00:00.333231  633119 ssh_runner.go:195] Run: systemctl --version
	I1209 11:00:00.342876  633119 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:00:00.513243  633119 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:00:00.519161  633119 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:00:00.519246  633119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:00:00.528593  633119 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 11:00:00.528623  633119 start.go:495] detecting cgroup driver to use...
	I1209 11:00:00.528715  633119 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:00:00.546952  633119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:00:00.561864  633119 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:00:00.561942  633119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:00:00.575804  633119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:00:00.590289  633119 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:00:00.741698  633119 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:00:00.883631  633119 docker.go:233] disabling docker service ...
	I1209 11:00:00.883717  633119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:00:00.904297  633119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:00:00.918619  633119 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:00:01.077613  633119 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:00:01.251513  633119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:00:01.266718  633119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:00:01.285767  633119 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:00:01.285850  633119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:00:01.297035  633119 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:00:01.297129  633119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:00:01.308075  633119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:00:01.319023  633119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:00:01.329996  633119 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:00:01.340862  633119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:00:01.351634  633119 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:00:01.362972  633119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:00:01.373494  633119 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:00:01.382730  633119 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:00:01.392477  633119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:00:01.535996  633119 ssh_runner.go:195] Run: sudo systemctl restart crio
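Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.10, set cgroup_manager to cgroupfs, move conmon into the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls before crio is restarted. A quick spot-check of the resulting drop-in, as a sketch (the grep pattern is illustrative, not part of the test):

	# values expected in the drop-in after the sed edits above
	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf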
	I1209 11:00:01.783924  633119 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:00:01.784005  633119 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:00:01.788810  633119 start.go:563] Will wait 60s for crictl version
	I1209 11:00:01.788894  633119 ssh_runner.go:195] Run: which crictl
	I1209 11:00:01.792793  633119 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:00:01.828069  633119 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:00:01.828147  633119 ssh_runner.go:195] Run: crio --version
	I1209 11:00:01.856155  633119 ssh_runner.go:195] Run: crio --version
	I1209 11:00:01.888597  633119 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:00:01.890226  633119 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 11:00:01.893412  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:01.893853  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 11:00:01.893884  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:01.894102  633119 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 11:00:01.898962  633119 kubeadm.go:883] updating cluster {Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.54 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:00:01.899109  633119 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:00:01.899171  633119 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:00:01.945219  633119 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:00:01.945297  633119 crio.go:433] Images already preloaded, skipping extraction
	I1209 11:00:01.945362  633119 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:00:01.982039  633119 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:00:01.982073  633119 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:00:01.982098  633119 kubeadm.go:934] updating node { 192.168.39.69 8443 v1.31.2 crio true true} ...
	I1209 11:00:01.982235  633119 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792382 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:00:01.982304  633119 ssh_runner.go:195] Run: crio config
	I1209 11:00:02.032195  633119 cni.go:84] Creating CNI manager for ""
	I1209 11:00:02.032218  633119 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1209 11:00:02.032233  633119 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:00:02.032273  633119 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-792382 NodeName:ha-792382 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:00:02.032447  633119 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-792382"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.69"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.69"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
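The generated kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below (2286 bytes). A hedged sketch of inspecting and validating it on the node; it assumes the v1.31.2 kubeadm staged under /var/lib/minikube/binaries supports the config validate subcommand:

	# file the provisioner is about to write
	sudo cat /var/tmp/minikube/kubeadm.yaml.new
	# validate with the matching kubeadm binary (assumption: 'config validate' is available)
	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new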
	
	I1209 11:00:02.032470  633119 kube-vip.go:115] generating kube-vip config ...
	I1209 11:00:02.032527  633119 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 11:00:02.044370  633119 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 11:00:02.044504  633119 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
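The manifest above has kube-vip hold the HA virtual IP 192.168.39.254 (the APIServerHAVIP from the cluster config) on eth0 and elect a leader through the plndr-cp-lock lease. A sketch of checking which control-plane node currently owns the VIP once the cluster is reachable again; it assumes a kubeconfig that points at this cluster:

	# the lease named in vip_leasename records the current kube-vip leader
	kubectl -n kube-system get lease plndr-cp-lock -o wide
	# on that node, the VIP should show up as a secondary address on eth0
	ip addr show eth0 | grep 192.168.39.254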
	I1209 11:00:02.044570  633119 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:00:02.054431  633119 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:00:02.054529  633119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1209 11:00:02.064195  633119 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1209 11:00:02.081234  633119 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:00:02.099055  633119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1209 11:00:02.116361  633119 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1209 11:00:02.133846  633119 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 11:00:02.138363  633119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:00:02.310777  633119 ssh_runner.go:195] Run: sudo systemctl start kubelet
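At this point the kubelet drop-in, the kubelet unit, the kubeadm config and the kube-vip static-pod manifest have all been copied over and kubelet is started through systemd. A short sketch of confirming the pieces landed where the scp lines above say they should:

	# kubelet unit plus the 10-kubeadm.conf drop-in written above
	systemctl cat kubelet
	# kube-vip staged as a static pod for kubelet to pick up
	ls -l /etc/kubernetes/manifests/kube-vip.yaml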
	I1209 11:00:02.325092  633119 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382 for IP: 192.168.39.69
	I1209 11:00:02.325144  633119 certs.go:194] generating shared ca certs ...
	I1209 11:00:02.325170  633119 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:00:02.325343  633119 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:00:02.325433  633119 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:00:02.325448  633119 certs.go:256] generating profile certs ...
	I1209 11:00:02.325538  633119 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key
	I1209 11:00:02.325566  633119 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.1dd7c68f
	I1209 11:00:02.325579  633119 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.1dd7c68f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.89 192.168.39.82 192.168.39.254]
	I1209 11:00:02.827882  633119 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.1dd7c68f ...
	I1209 11:00:02.827914  633119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.1dd7c68f: {Name:mk80e22e890d22b3f355dc15ccf8d59360abd429 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:00:02.828087  633119 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.1dd7c68f ...
	I1209 11:00:02.828099  633119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.1dd7c68f: {Name:mk53f62924ce37068640a587815f7e82b51c466b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:00:02.828165  633119 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.1dd7c68f -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt
	I1209 11:00:02.828344  633119 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.1dd7c68f -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key
	I1209 11:00:02.828501  633119 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key
	I1209 11:00:02.828519  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 11:00:02.828531  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 11:00:02.828545  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 11:00:02.828559  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 11:00:02.828570  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 11:00:02.828585  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 11:00:02.828597  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 11:00:02.828606  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 11:00:02.828655  633119 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:00:02.828684  633119 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:00:02.828694  633119 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:00:02.828714  633119 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:00:02.828735  633119 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:00:02.828755  633119 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:00:02.828794  633119 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:00:02.828819  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:00:02.828834  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem -> /usr/share/ca-certificates/617017.pem
	I1209 11:00:02.828849  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /usr/share/ca-certificates/6170172.pem
	I1209 11:00:02.829453  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:00:02.854330  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:00:02.877871  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:00:02.902595  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:00:02.927262  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 11:00:02.951278  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 11:00:02.974921  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:00:02.998113  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:00:03.021844  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:00:03.045297  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:00:03.069947  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:00:03.093633  633119 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:00:03.110666  633119 ssh_runner.go:195] Run: openssl version
	I1209 11:00:03.116472  633119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:00:03.127335  633119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:00:03.131684  633119 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:00:03.131746  633119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:00:03.137252  633119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:00:03.146778  633119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:00:03.157487  633119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:00:03.161805  633119 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:00:03.161870  633119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:00:03.167605  633119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:00:03.177085  633119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:00:03.187750  633119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:00:03.191945  633119 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:00:03.192066  633119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:00:03.197663  633119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:00:03.207006  633119 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:00:03.211252  633119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:00:03.218956  633119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:00:03.224341  633119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:00:03.229773  633119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:00:03.235329  633119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:00:03.240591  633119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
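
Two things are happening in the openssl runs above: openssl x509 -hash -noout prints the subject hash that names the /etc/ssl/certs/<hash>.0 symlink (b5213941.0 for minikubeCA.pem, 51391683.0 and 3ec20f2e.0 for the user certs), which is how the system trust store locates the CAs, and -checkend 86400 confirms each serving certificate stays valid for at least another 24 hours. A rough Go equivalent of the expiry check, over a hypothetical list of the same cert paths, could look like this:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the PEM certificate at path is still valid
// for at least the given duration (the same question openssl's -checkend
// answers, expressed in seconds there).
func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	// Hypothetical list mirroring some of the certs checked in the log above.
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		ok, err := certValidFor(p, 24*time.Hour)
		fmt.Println(p, ok, err)
	}
}
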
	I1209 11:00:03.246231  633119 kubeadm.go:392] StartCluster: {Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.54 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagecla
ss:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:00:03.246356  633119 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:00:03.246426  633119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:00:03.284534  633119 cri.go:89] found id: "8c25fc6fac09e1942f71fe72fe70632a14b3b57122944b01cd9b6d8ffdf54b16"
	I1209 11:00:03.284557  633119 cri.go:89] found id: "f12d8a04a431ddf40d44d416ee9d09815655d15c3c10d8ff8bf37d4f3dc2d041"
	I1209 11:00:03.284561  633119 cri.go:89] found id: "2f1ea1744a4f918de3f7835ae8108eb6bddf8d49fe6ddb07b8c1bf6ee00f01e3"
	I1209 11:00:03.284564  633119 cri.go:89] found id: "2d1908a476753017705e22c87981bb495c3ef86b9af8a1f3971334fd8a824497"
	I1209 11:00:03.284567  633119 cri.go:89] found id: "f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd"
	I1209 11:00:03.284570  633119 cri.go:89] found id: "afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733"
	I1209 11:00:03.284573  633119 cri.go:89] found id: "b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3"
	I1209 11:00:03.284575  633119 cri.go:89] found id: "3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522"
	I1209 11:00:03.284577  633119 cri.go:89] found id: "082e8ff7e6c7e3e1110852687152ddb22202b3134aa3105a8b11aa7d702bcd6a"
	I1209 11:00:03.284584  633119 cri.go:89] found id: "64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f"
	I1209 11:00:03.284586  633119 cri.go:89] found id: "778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63"
	I1209 11:00:03.284589  633119 cri.go:89] found id: "d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee"
	I1209 11:00:03.284591  633119 cri.go:89] found id: "00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604"
	I1209 11:00:03.284594  633119 cri.go:89] found id: ""
	I1209 11:00:03.284640  633119 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
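
The truncated log ends with cri.go enumerating the kube-system containers that survived the restart; the crictl invocation shown (crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system) returns one container ID per line. A standalone sketch of the same listing, run locally rather than through minikube's ssh_runner, might be:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Roughly what cri.go does over SSH in the log above: list all
	// kube-system container IDs known to the CRI runtime. Assumes crictl
	// is on PATH and can reach the CRI-O socket.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}

The empty trailing "found id" entry in the log most likely comes from splitting crictl's final newline; strings.Fields in the sketch simply drops that empty element.
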
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-792382 -n ha-792382
helpers_test.go:261: (dbg) Run:  kubectl --context ha-792382 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (361.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 stop -v=7 --alsologtostderr
E1209 11:02:56.376523  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:03:22.652923  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-792382 stop -v=7 --alsologtostderr: exit status 82 (2m0.485411764s)

                                                
                                                
-- stdout --
	* Stopping node "ha-792382-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 11:02:45.356565  634899 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:02:45.356697  634899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:02:45.356708  634899 out.go:358] Setting ErrFile to fd 2...
	I1209 11:02:45.356712  634899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:02:45.356896  634899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:02:45.357125  634899 out.go:352] Setting JSON to false
	I1209 11:02:45.357217  634899 mustload.go:65] Loading cluster: ha-792382
	I1209 11:02:45.357617  634899 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:02:45.357711  634899 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 11:02:45.357889  634899 mustload.go:65] Loading cluster: ha-792382
	I1209 11:02:45.358018  634899 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:02:45.358052  634899 stop.go:39] StopHost: ha-792382-m04
	I1209 11:02:45.358496  634899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:02:45.358542  634899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:02:45.374098  634899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34413
	I1209 11:02:45.374651  634899 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:02:45.375218  634899 main.go:141] libmachine: Using API Version  1
	I1209 11:02:45.375240  634899 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:02:45.375626  634899 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:02:45.377816  634899 out.go:177] * Stopping node "ha-792382-m04"  ...
	I1209 11:02:45.379252  634899 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1209 11:02:45.379298  634899 main.go:141] libmachine: (ha-792382-m04) Calling .DriverName
	I1209 11:02:45.379527  634899 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1209 11:02:45.379558  634899 main.go:141] libmachine: (ha-792382-m04) Calling .GetSSHHostname
	I1209 11:02:45.382339  634899 main.go:141] libmachine: (ha-792382-m04) DBG | domain ha-792382-m04 has defined MAC address 52:54:00:bf:1c:a3 in network mk-ha-792382
	I1209 11:02:45.382773  634899 main.go:141] libmachine: (ha-792382-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:1c:a3", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 12:02:13 +0000 UTC Type:0 Mac:52:54:00:bf:1c:a3 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-792382-m04 Clientid:01:52:54:00:bf:1c:a3}
	I1209 11:02:45.382804  634899 main.go:141] libmachine: (ha-792382-m04) DBG | domain ha-792382-m04 has defined IP address 192.168.39.54 and MAC address 52:54:00:bf:1c:a3 in network mk-ha-792382
	I1209 11:02:45.382938  634899 main.go:141] libmachine: (ha-792382-m04) Calling .GetSSHPort
	I1209 11:02:45.383110  634899 main.go:141] libmachine: (ha-792382-m04) Calling .GetSSHKeyPath
	I1209 11:02:45.383237  634899 main.go:141] libmachine: (ha-792382-m04) Calling .GetSSHUsername
	I1209 11:02:45.383407  634899 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382-m04/id_rsa Username:docker}
	I1209 11:02:45.468752  634899 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1209 11:02:45.521681  634899 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
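
Before powering the node off, the stop path backs up /etc/cni and /etc/kubernetes into /var/lib/minikube/backup; rsync's --archive flag preserves permissions and ownership, and --relative recreates the full source path under the destination (so the files land at /var/lib/minikube/backup/etc/kubernetes/...). A minimal sketch of that backup step, using a hypothetical helper name, is:

package main

import (
	"fmt"
	"os/exec"
)

// backupDirs mirrors the rsync calls in the log above: --archive keeps
// permissions and ownership, --relative recreates the source path under
// the destination directory.
func backupDirs(dirs []string, dest string) error {
	if out, err := exec.Command("sudo", "mkdir", "-p", dest).CombinedOutput(); err != nil {
		return fmt.Errorf("mkdir: %v: %s", err, out)
	}
	for _, d := range dirs {
		if out, err := exec.Command("sudo", "rsync", "--archive", "--relative", d, dest).CombinedOutput(); err != nil {
			return fmt.Errorf("rsync %s: %v: %s", d, err, out)
		}
	}
	return nil
}

func main() {
	// Same paths as the stop log above; a sketch, not minikube's code.
	if err := backupDirs([]string{"/etc/cni", "/etc/kubernetes"}, "/var/lib/minikube/backup"); err != nil {
		fmt.Println(err)
	}
}
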
	I1209 11:02:45.574155  634899 main.go:141] libmachine: Stopping "ha-792382-m04"...
	I1209 11:02:45.574210  634899 main.go:141] libmachine: (ha-792382-m04) Calling .GetState
	I1209 11:02:45.575867  634899 main.go:141] libmachine: (ha-792382-m04) Calling .Stop
	I1209 11:02:45.579367  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 0/120
	I1209 11:02:46.581634  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 1/120
	I1209 11:02:47.582954  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 2/120
	I1209 11:02:48.584642  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 3/120
	I1209 11:02:49.586065  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 4/120
	I1209 11:02:50.588157  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 5/120
	I1209 11:02:51.589825  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 6/120
	I1209 11:02:52.591071  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 7/120
	I1209 11:02:53.592721  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 8/120
	I1209 11:02:54.594034  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 9/120
	I1209 11:02:55.596424  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 10/120
	I1209 11:02:56.597988  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 11/120
	I1209 11:02:57.599380  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 12/120
	I1209 11:02:58.600935  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 13/120
	I1209 11:02:59.602479  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 14/120
	I1209 11:03:00.604553  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 15/120
	I1209 11:03:01.606016  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 16/120
	I1209 11:03:02.607534  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 17/120
	I1209 11:03:03.608784  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 18/120
	I1209 11:03:04.610287  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 19/120
	I1209 11:03:05.612629  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 20/120
	I1209 11:03:06.613955  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 21/120
	I1209 11:03:07.615316  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 22/120
	I1209 11:03:08.617450  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 23/120
	I1209 11:03:09.618740  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 24/120
	I1209 11:03:10.620929  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 25/120
	I1209 11:03:11.622356  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 26/120
	I1209 11:03:12.623921  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 27/120
	I1209 11:03:13.625301  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 28/120
	I1209 11:03:14.627042  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 29/120
	I1209 11:03:15.629173  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 30/120
	I1209 11:03:16.631118  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 31/120
	I1209 11:03:17.632914  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 32/120
	I1209 11:03:18.634491  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 33/120
	I1209 11:03:19.636749  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 34/120
	I1209 11:03:20.638910  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 35/120
	I1209 11:03:21.641229  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 36/120
	I1209 11:03:22.642457  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 37/120
	I1209 11:03:23.643782  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 38/120
	I1209 11:03:24.645240  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 39/120
	I1209 11:03:25.647437  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 40/120
	I1209 11:03:26.648837  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 41/120
	I1209 11:03:27.650093  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 42/120
	I1209 11:03:28.651311  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 43/120
	I1209 11:03:29.652492  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 44/120
	I1209 11:03:30.654711  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 45/120
	I1209 11:03:31.656577  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 46/120
	I1209 11:03:32.657834  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 47/120
	I1209 11:03:33.659443  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 48/120
	I1209 11:03:34.661148  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 49/120
	I1209 11:03:35.663852  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 50/120
	I1209 11:03:36.665265  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 51/120
	I1209 11:03:37.666829  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 52/120
	I1209 11:03:38.668503  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 53/120
	I1209 11:03:39.670317  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 54/120
	I1209 11:03:40.672319  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 55/120
	I1209 11:03:41.674625  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 56/120
	I1209 11:03:42.676703  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 57/120
	I1209 11:03:43.678099  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 58/120
	I1209 11:03:44.679501  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 59/120
	I1209 11:03:45.681223  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 60/120
	I1209 11:03:46.682721  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 61/120
	I1209 11:03:47.684651  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 62/120
	I1209 11:03:48.686148  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 63/120
	I1209 11:03:49.687711  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 64/120
	I1209 11:03:50.690049  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 65/120
	I1209 11:03:51.691656  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 66/120
	I1209 11:03:52.692860  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 67/120
	I1209 11:03:53.694695  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 68/120
	I1209 11:03:54.696159  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 69/120
	I1209 11:03:55.698548  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 70/120
	I1209 11:03:56.699752  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 71/120
	I1209 11:03:57.701135  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 72/120
	I1209 11:03:58.703023  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 73/120
	I1209 11:03:59.704803  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 74/120
	I1209 11:04:00.707178  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 75/120
	I1209 11:04:01.708820  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 76/120
	I1209 11:04:02.710153  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 77/120
	I1209 11:04:03.711668  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 78/120
	I1209 11:04:04.712967  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 79/120
	I1209 11:04:05.715156  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 80/120
	I1209 11:04:06.716857  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 81/120
	I1209 11:04:07.718223  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 82/120
	I1209 11:04:08.719575  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 83/120
	I1209 11:04:09.720935  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 84/120
	I1209 11:04:10.723096  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 85/120
	I1209 11:04:11.724570  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 86/120
	I1209 11:04:12.726028  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 87/120
	I1209 11:04:13.727640  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 88/120
	I1209 11:04:14.728908  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 89/120
	I1209 11:04:15.731525  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 90/120
	I1209 11:04:16.732928  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 91/120
	I1209 11:04:17.735132  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 92/120
	I1209 11:04:18.737430  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 93/120
	I1209 11:04:19.738737  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 94/120
	I1209 11:04:20.740985  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 95/120
	I1209 11:04:21.742413  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 96/120
	I1209 11:04:22.744771  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 97/120
	I1209 11:04:23.746262  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 98/120
	I1209 11:04:24.747520  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 99/120
	I1209 11:04:25.750075  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 100/120
	I1209 11:04:26.751823  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 101/120
	I1209 11:04:27.753404  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 102/120
	I1209 11:04:28.754904  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 103/120
	I1209 11:04:29.756365  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 104/120
	I1209 11:04:30.758648  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 105/120
	I1209 11:04:31.760053  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 106/120
	I1209 11:04:32.761647  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 107/120
	I1209 11:04:33.763079  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 108/120
	I1209 11:04:34.764872  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 109/120
	I1209 11:04:35.766966  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 110/120
	I1209 11:04:36.769102  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 111/120
	I1209 11:04:37.771108  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 112/120
	I1209 11:04:38.772846  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 113/120
	I1209 11:04:39.774386  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 114/120
	I1209 11:04:40.776214  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 115/120
	I1209 11:04:41.777687  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 116/120
	I1209 11:04:42.779034  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 117/120
	I1209 11:04:43.780900  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 118/120
	I1209 11:04:44.782243  634899 main.go:141] libmachine: (ha-792382-m04) Waiting for machine to stop 119/120
	I1209 11:04:45.782853  634899 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1209 11:04:45.782935  634899 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1209 11:04:45.784519  634899 out.go:201] 
	W1209 11:04:45.785532  634899 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1209 11:04:45.785543  634899 out.go:270] * 
	* 
	W1209 11:04:45.788761  634899 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 11:04:45.789835  634899 out.go:201] 

                                                
                                                
** /stderr **
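
The stderr above shows why the command exits with status 82: libmachine polls the ha-792382-m04 domain once per second and gives up after 120 attempts while the VM still reports Running, which minikube surfaces as GUEST_STOP_TIMEOUT. The generic shape of that wait loop, with hypothetical names and a pluggable state check standing in for the libmachine driver call, is roughly:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls getState once per second, up to maxAttempts times,
// and returns an error if the machine never leaves the "Running" state.
// This mirrors the "Waiting for machine to stop N/120" lines above.
func waitForStop(getState func() (string, error), maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		state, err := getState()
		if err != nil {
			return err
		}
		if state != "Running" {
			return nil // machine reached a stopped state
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Fake driver that never stops, reproducing the timeout in the log
	// (3 attempts here; the real run uses 120).
	err := waitForStop(func() (string, error) { return "Running", nil }, 3)
	fmt.Println("stop err:", err)
}

Because the run never gets past ha-792382-m04, the remaining nodes stay up, which is why the status assertions below complain that the kubelets and apiservers are not stopped.
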
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-792382 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr: (19.151429417s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-792382 -n ha-792382
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-792382 logs -n 25: (2.026940205s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-792382 ssh -n ha-792382-m02 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m03_ha-792382-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04:/home/docker/cp-test_ha-792382-m03_ha-792382-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m04 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m03_ha-792382-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp testdata/cp-test.txt                                                | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3558467820/001/cp-test_ha-792382-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382:/home/docker/cp-test_ha-792382-m04_ha-792382.txt                       |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382 sudo cat                                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382.txt                                 |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m02:/home/docker/cp-test_ha-792382-m04_ha-792382-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m02 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m03:/home/docker/cp-test_ha-792382-m04_ha-792382-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n                                                                 | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | ha-792382-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-792382 ssh -n ha-792382-m03 sudo cat                                          | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC | 09 Dec 24 10:53 UTC |
	|         | /home/docker/cp-test_ha-792382-m04_ha-792382-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-792382 node stop m02 -v=7                                                     | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-792382 node start m02 -v=7                                                    | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-792382 -v=7                                                           | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-792382 -v=7                                                                | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-792382 --wait=true -v=7                                                    | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 10:58 UTC | 09 Dec 24 11:02 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-792382                                                                | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 11:02 UTC |                     |
	| node    | ha-792382 node delete m03 -v=7                                                   | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 11:02 UTC | 09 Dec 24 11:02 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-792382 stop -v=7                                                              | ha-792382 | jenkins | v1.34.0 | 09 Dec 24 11:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 10:58:28
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 10:58:28.191865  633119 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:58:28.191988  633119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:58:28.191998  633119 out.go:358] Setting ErrFile to fd 2...
	I1209 10:58:28.192003  633119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:58:28.192202  633119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 10:58:28.192853  633119 out.go:352] Setting JSON to false
	I1209 10:58:28.193921  633119 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":13252,"bootTime":1733728656,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 10:58:28.194037  633119 start.go:139] virtualization: kvm guest
	I1209 10:58:28.196208  633119 out.go:177] * [ha-792382] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 10:58:28.197833  633119 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 10:58:28.197830  633119 notify.go:220] Checking for updates...
	I1209 10:58:28.200528  633119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 10:58:28.201941  633119 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:58:28.203146  633119 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:58:28.204302  633119 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 10:58:28.205523  633119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 10:58:28.207407  633119 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:58:28.207565  633119 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 10:58:28.208116  633119 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:58:28.208157  633119 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:58:28.223976  633119 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
	I1209 10:58:28.224573  633119 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:58:28.225315  633119 main.go:141] libmachine: Using API Version  1
	I1209 10:58:28.225346  633119 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:58:28.225733  633119 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:58:28.225929  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:58:28.266828  633119 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 10:58:28.268216  633119 start.go:297] selected driver: kvm2
	I1209 10:58:28.268235  633119 start.go:901] validating driver "kvm2" against &{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.54 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false defa
ult-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:58:28.268403  633119 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 10:58:28.268799  633119 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:58:28.268907  633119 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 10:58:28.284451  633119 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 10:58:28.285578  633119 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 10:58:28.285629  633119 cni.go:84] Creating CNI manager for ""
	I1209 10:58:28.285696  633119 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1209 10:58:28.285783  633119 start.go:340] cluster config:
	{Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.54 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:58:28.285928  633119 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:58:28.287683  633119 out.go:177] * Starting "ha-792382" primary control-plane node in "ha-792382" cluster
	I1209 10:58:28.288797  633119 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:58:28.288833  633119 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 10:58:28.288844  633119 cache.go:56] Caching tarball of preloaded images
	I1209 10:58:28.288931  633119 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 10:58:28.288941  633119 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 10:58:28.289069  633119 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/config.json ...
	I1209 10:58:28.289270  633119 start.go:360] acquireMachinesLock for ha-792382: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 10:58:28.289310  633119 start.go:364] duration metric: took 23.15µs to acquireMachinesLock for "ha-792382"
	I1209 10:58:28.289325  633119 start.go:96] Skipping create...Using existing machine configuration
	I1209 10:58:28.289331  633119 fix.go:54] fixHost starting: 
	I1209 10:58:28.289621  633119 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:58:28.289655  633119 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:58:28.306463  633119 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44091
	I1209 10:58:28.307025  633119 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:58:28.307611  633119 main.go:141] libmachine: Using API Version  1
	I1209 10:58:28.307644  633119 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:58:28.307965  633119 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:58:28.308153  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:58:28.308285  633119 main.go:141] libmachine: (ha-792382) Calling .GetState
	I1209 10:58:28.309937  633119 fix.go:112] recreateIfNeeded on ha-792382: state=Running err=<nil>
	W1209 10:58:28.309962  633119 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 10:58:28.312538  633119 out.go:177] * Updating the running kvm2 "ha-792382" VM ...
	I1209 10:58:28.313754  633119 machine.go:93] provisionDockerMachine start ...
	I1209 10:58:28.313779  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:58:28.314000  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:58:28.316352  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.316746  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:58:28.316778  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.316919  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:58:28.317140  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:28.317333  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:28.317459  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:58:28.317601  633119 main.go:141] libmachine: Using SSH client type: native
	I1209 10:58:28.317852  633119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:58:28.317870  633119 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 10:58:28.432781  633119 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792382
	
	I1209 10:58:28.432821  633119 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:58:28.433137  633119 buildroot.go:166] provisioning hostname "ha-792382"
	I1209 10:58:28.433174  633119 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:58:28.433406  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:58:28.436046  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.436430  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:58:28.436451  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.436620  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:58:28.436816  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:28.436996  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:28.437169  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:58:28.437333  633119 main.go:141] libmachine: Using SSH client type: native
	I1209 10:58:28.437524  633119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:58:28.437539  633119 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792382 && echo "ha-792382" | sudo tee /etc/hostname
	I1209 10:58:28.569208  633119 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792382
	
	I1209 10:58:28.569241  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:58:28.572736  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.573149  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:58:28.573178  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.573326  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:58:28.573553  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:28.573730  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:28.573887  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:58:28.574066  633119 main.go:141] libmachine: Using SSH client type: native
	I1209 10:58:28.574274  633119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:58:28.574291  633119 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792382' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792382/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792382' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 10:58:28.678931  633119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 10:58:28.678977  633119 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 10:58:28.679005  633119 buildroot.go:174] setting up certificates
	I1209 10:58:28.679017  633119 provision.go:84] configureAuth start
	I1209 10:58:28.679026  633119 main.go:141] libmachine: (ha-792382) Calling .GetMachineName
	I1209 10:58:28.679301  633119 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 10:58:28.681959  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.682322  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:58:28.682351  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.682551  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:58:28.684619  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.684952  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:58:28.684973  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.685123  633119 provision.go:143] copyHostCerts
	I1209 10:58:28.685169  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:58:28.685207  633119 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 10:58:28.685224  633119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 10:58:28.685289  633119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 10:58:28.685363  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:58:28.685383  633119 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 10:58:28.685390  633119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 10:58:28.685412  633119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 10:58:28.685456  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:58:28.685472  633119 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 10:58:28.685477  633119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 10:58:28.685498  633119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 10:58:28.685545  633119 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.ha-792382 san=[127.0.0.1 192.168.39.69 ha-792382 localhost minikube]
	I1209 10:58:28.892199  633119 provision.go:177] copyRemoteCerts
	I1209 10:58:28.892274  633119 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 10:58:28.892309  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:58:28.895225  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.895558  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:58:28.895580  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:28.895803  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:58:28.896015  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:28.896149  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:58:28.896267  633119 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:58:28.977362  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 10:58:28.977440  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 10:58:29.001860  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 10:58:29.001940  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 10:58:29.025837  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 10:58:29.025908  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1209 10:58:29.049057  633119 provision.go:87] duration metric: took 370.025873ms to configureAuth
	I1209 10:58:29.049092  633119 buildroot.go:189] setting minikube options for container-runtime
	I1209 10:58:29.049327  633119 config.go:182] Loaded profile config "ha-792382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:58:29.049421  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:58:29.052110  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:29.052514  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:58:29.052547  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:58:29.052732  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:58:29.052967  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:29.053186  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:58:29.053365  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:58:29.053556  633119 main.go:141] libmachine: Using SSH client type: native
	I1209 10:58:29.053726  633119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 10:58:29.053743  633119 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 10:59:59.840941  633119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 10:59:59.840982  633119 machine.go:96] duration metric: took 1m31.527209851s to provisionDockerMachine
	I1209 10:59:59.841017  633119 start.go:293] postStartSetup for "ha-792382" (driver="kvm2")
	I1209 10:59:59.841034  633119 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 10:59:59.841062  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:59:59.841415  633119 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 10:59:59.841450  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:59:59.844254  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:59:59.844690  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:59:59.844714  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:59:59.844885  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:59:59.845113  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:59:59.845265  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:59:59.845395  633119 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 10:59:59.925350  633119 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 10:59:59.929576  633119 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 10:59:59.929634  633119 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 10:59:59.929724  633119 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 10:59:59.929797  633119 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 10:59:59.929808  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /etc/ssl/certs/6170172.pem
	I1209 10:59:59.929896  633119 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 10:59:59.939169  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 10:59:59.962315  633119 start.go:296] duration metric: took 121.273228ms for postStartSetup
	I1209 10:59:59.962383  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 10:59:59.962723  633119 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1209 10:59:59.962758  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 10:59:59.965355  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:59:59.965749  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 10:59:59.965775  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 10:59:59.965965  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 10:59:59.966140  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 10:59:59.966320  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 10:59:59.966442  633119 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	W1209 11:00:00.044743  633119 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1209 11:00:00.044776  633119 fix.go:56] duration metric: took 1m31.755445434s for fixHost
	I1209 11:00:00.044801  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 11:00:00.047669  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:00.048000  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 11:00:00.048030  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:00.048250  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 11:00:00.048461  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 11:00:00.048664  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 11:00:00.048828  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 11:00:00.049034  633119 main.go:141] libmachine: Using SSH client type: native
	I1209 11:00:00.049316  633119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I1209 11:00:00.049343  633119 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:00:00.167593  633119 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733742000.134008518
	
	I1209 11:00:00.167620  633119 fix.go:216] guest clock: 1733742000.134008518
	I1209 11:00:00.167631  633119 fix.go:229] Guest: 2024-12-09 11:00:00.134008518 +0000 UTC Remote: 2024-12-09 11:00:00.044783223 +0000 UTC m=+91.895951206 (delta=89.225295ms)
	I1209 11:00:00.167702  633119 fix.go:200] guest clock delta is within tolerance: 89.225295ms
	I1209 11:00:00.167714  633119 start.go:83] releasing machines lock for "ha-792382", held for 1m31.87839392s
	I1209 11:00:00.167767  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 11:00:00.168055  633119 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 11:00:00.171062  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:00.171537  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 11:00:00.171561  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:00.171777  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 11:00:00.172346  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 11:00:00.172553  633119 main.go:141] libmachine: (ha-792382) Calling .DriverName
	I1209 11:00:00.172649  633119 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:00:00.172708  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 11:00:00.172783  633119 ssh_runner.go:195] Run: cat /version.json
	I1209 11:00:00.172816  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHHostname
	I1209 11:00:00.175471  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:00.175688  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:00.175891  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 11:00:00.175914  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:00.176100  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 11:00:00.176126  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 11:00:00.176151  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:00.176301  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHPort
	I1209 11:00:00.176321  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 11:00:00.176509  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHKeyPath
	I1209 11:00:00.176637  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 11:00:00.176772  633119 main.go:141] libmachine: (ha-792382) Calling .GetSSHUsername
	I1209 11:00:00.176844  633119 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 11:00:00.176908  633119 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/ha-792382/id_rsa Username:docker}
	I1209 11:00:00.333231  633119 ssh_runner.go:195] Run: systemctl --version
	I1209 11:00:00.342876  633119 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:00:00.513243  633119 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:00:00.519161  633119 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:00:00.519246  633119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:00:00.528593  633119 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 11:00:00.528623  633119 start.go:495] detecting cgroup driver to use...
	I1209 11:00:00.528715  633119 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:00:00.546952  633119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:00:00.561864  633119 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:00:00.561942  633119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:00:00.575804  633119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:00:00.590289  633119 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:00:00.741698  633119 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:00:00.883631  633119 docker.go:233] disabling docker service ...
	I1209 11:00:00.883717  633119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:00:00.904297  633119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:00:00.918619  633119 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:00:01.077613  633119 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:00:01.251513  633119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:00:01.266718  633119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:00:01.285767  633119 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:00:01.285850  633119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:00:01.297035  633119 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:00:01.297129  633119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:00:01.308075  633119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:00:01.319023  633119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:00:01.329996  633119 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:00:01.340862  633119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:00:01.351634  633119 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:00:01.362972  633119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:00:01.373494  633119 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:00:01.382730  633119 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:00:01.392477  633119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:00:01.535996  633119 ssh_runner.go:195] Run: sudo systemctl restart crio
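(The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. The file itself is not printed in the log; a minimal reconstruction of the drop-in those edits produce, inferred only from the commands shown and with line order possibly differing, would be:)

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]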
	I1209 11:00:01.783924  633119 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:00:01.784005  633119 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:00:01.788810  633119 start.go:563] Will wait 60s for crictl version
	I1209 11:00:01.788894  633119 ssh_runner.go:195] Run: which crictl
	I1209 11:00:01.792793  633119 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:00:01.828069  633119 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:00:01.828147  633119 ssh_runner.go:195] Run: crio --version
	I1209 11:00:01.856155  633119 ssh_runner.go:195] Run: crio --version
	I1209 11:00:01.888597  633119 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:00:01.890226  633119 main.go:141] libmachine: (ha-792382) Calling .GetIP
	I1209 11:00:01.893412  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:01.893853  633119 main.go:141] libmachine: (ha-792382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:82:f7", ip: ""} in network mk-ha-792382: {Iface:virbr1 ExpiryTime:2024-12-09 11:49:26 +0000 UTC Type:0 Mac:52:54:00:a8:82:f7 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-792382 Clientid:01:52:54:00:a8:82:f7}
	I1209 11:00:01.893884  633119 main.go:141] libmachine: (ha-792382) DBG | domain ha-792382 has defined IP address 192.168.39.69 and MAC address 52:54:00:a8:82:f7 in network mk-ha-792382
	I1209 11:00:01.894102  633119 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 11:00:01.898962  633119 kubeadm.go:883] updating cluster {Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.54 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:00:01.899109  633119 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:00:01.899171  633119 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:00:01.945219  633119 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:00:01.945297  633119 crio.go:433] Images already preloaded, skipping extraction
	I1209 11:00:01.945362  633119 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:00:01.982039  633119 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:00:01.982073  633119 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:00:01.982098  633119 kubeadm.go:934] updating node { 192.168.39.69 8443 v1.31.2 crio true true} ...
	I1209 11:00:01.982235  633119 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792382 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:00:01.982304  633119 ssh_runner.go:195] Run: crio config
	I1209 11:00:02.032195  633119 cni.go:84] Creating CNI manager for ""
	I1209 11:00:02.032218  633119 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1209 11:00:02.032233  633119 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:00:02.032273  633119 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-792382 NodeName:ha-792382 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:00:02.032447  633119 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-792382"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.69"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.69"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:00:02.032470  633119 kube-vip.go:115] generating kube-vip config ...
	I1209 11:00:02.032527  633119 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1209 11:00:02.044370  633119 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1209 11:00:02.044504  633119 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
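(The generated kube-vip static pod above announces the cluster's HA virtual IP 192.168.39.254 on eth0 and, with lb_enable/lb_port set, load-balances API-server traffic on port 8443. A quick check from the control-plane node would look roughly like the following; these commands are illustrative, use standard iproute2/curl, and are not part of the test run:)

	ip addr show eth0 | grep -w 192.168.39.254     # VIP bound by kube-vip on the elected leader
	curl -k https://192.168.39.254:8443/version    # API server answering through the VIP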
	I1209 11:00:02.044570  633119 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:00:02.054431  633119 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:00:02.054529  633119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1209 11:00:02.064195  633119 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1209 11:00:02.081234  633119 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:00:02.099055  633119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1209 11:00:02.116361  633119 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
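(The "scp memory" entries copy in-memory renderings straight onto the node: the kubelet drop-in and unit file, the kubeadm config dumped above as /var/tmp/minikube/kubeadm.yaml.new, and the kube-vip static pod manifest. To inspect the rendered files on the VM afterwards, one could run something like the following; illustrative only, not part of the test:)

	minikube -p ha-792382 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	minikube -p ha-792382 ssh -- sudo cat /etc/kubernetes/manifests/kube-vip.yaml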
	I1209 11:00:02.133846  633119 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1209 11:00:02.138363  633119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:00:02.310777  633119 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:00:02.325092  633119 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382 for IP: 192.168.39.69
	I1209 11:00:02.325144  633119 certs.go:194] generating shared ca certs ...
	I1209 11:00:02.325170  633119 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:00:02.325343  633119 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:00:02.325433  633119 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:00:02.325448  633119 certs.go:256] generating profile certs ...
	I1209 11:00:02.325538  633119 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/client.key
	I1209 11:00:02.325566  633119 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.1dd7c68f
	I1209 11:00:02.325579  633119 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.1dd7c68f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.89 192.168.39.82 192.168.39.254]
	I1209 11:00:02.827882  633119 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.1dd7c68f ...
	I1209 11:00:02.827914  633119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.1dd7c68f: {Name:mk80e22e890d22b3f355dc15ccf8d59360abd429 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:00:02.828087  633119 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.1dd7c68f ...
	I1209 11:00:02.828099  633119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.1dd7c68f: {Name:mk53f62924ce37068640a587815f7e82b51c466b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:00:02.828165  633119 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt.1dd7c68f -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt
	I1209 11:00:02.828344  633119 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key.1dd7c68f -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key
	I1209 11:00:02.828501  633119 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key
	I1209 11:00:02.828519  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 11:00:02.828531  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 11:00:02.828545  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 11:00:02.828559  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 11:00:02.828570  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 11:00:02.828585  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 11:00:02.828597  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 11:00:02.828606  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 11:00:02.828655  633119 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:00:02.828684  633119 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:00:02.828694  633119 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:00:02.828714  633119 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:00:02.828735  633119 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:00:02.828755  633119 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:00:02.828794  633119 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:00:02.828819  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:00:02.828834  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem -> /usr/share/ca-certificates/617017.pem
	I1209 11:00:02.828849  633119 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /usr/share/ca-certificates/6170172.pem
	I1209 11:00:02.829453  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:00:02.854330  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:00:02.877871  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:00:02.902595  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:00:02.927262  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 11:00:02.951278  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 11:00:02.974921  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:00:02.998113  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/ha-792382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:00:03.021844  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:00:03.045297  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:00:03.069947  633119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:00:03.093633  633119 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:00:03.110666  633119 ssh_runner.go:195] Run: openssl version
	I1209 11:00:03.116472  633119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:00:03.127335  633119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:00:03.131684  633119 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:00:03.131746  633119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:00:03.137252  633119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:00:03.146778  633119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:00:03.157487  633119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:00:03.161805  633119 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:00:03.161870  633119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:00:03.167605  633119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:00:03.177085  633119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:00:03.187750  633119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:00:03.191945  633119 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:00:03.192066  633119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:00:03.197663  633119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
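The hash-and-symlink sequence above is the standard OpenSSL CA directory convention: openssl x509 -hash -noout prints the certificate's subject hash, and the certificate is then linked as /etc/ssl/certs/<hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run) so TLS consumers can locate it by subject. A minimal Go sketch of the same convention, shelling out to openssl exactly as the log does; the paths used in main are illustrative only:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash mirrors the log's two steps: compute the OpenSSL
	// subject hash of a PEM certificate, then symlink it as <hash>.0.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any existing link, like `ln -fs`
		return os.Symlink(certPath, link)
	}

	func main() {
		// Illustrative paths; the run above uses /usr/share/ca-certificates and /etc/ssl/certs.
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}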
	I1209 11:00:03.207006  633119 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:00:03.211252  633119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:00:03.218956  633119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:00:03.224341  633119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:00:03.229773  633119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:00:03.235329  633119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:00:03.240591  633119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
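The -checkend 86400 runs above ask openssl to exit non-zero if a certificate expires within the next 86400 seconds (24 hours); that is how the start path decides whether each control-plane certificate still has at least a day of validity. The same check expressed in plain Go, as a sketch with an illustrative file path:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file
	// expires within the given duration, the equivalent of openssl's -checkend.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}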
	I1209 11:00:03.246231  633119 kubeadm.go:392] StartCluster: {Name:ha-792382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-792382 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.82 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.54 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagecla
ss:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:00:03.246356  633119 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:00:03.246426  633119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:00:03.284534  633119 cri.go:89] found id: "8c25fc6fac09e1942f71fe72fe70632a14b3b57122944b01cd9b6d8ffdf54b16"
	I1209 11:00:03.284557  633119 cri.go:89] found id: "f12d8a04a431ddf40d44d416ee9d09815655d15c3c10d8ff8bf37d4f3dc2d041"
	I1209 11:00:03.284561  633119 cri.go:89] found id: "2f1ea1744a4f918de3f7835ae8108eb6bddf8d49fe6ddb07b8c1bf6ee00f01e3"
	I1209 11:00:03.284564  633119 cri.go:89] found id: "2d1908a476753017705e22c87981bb495c3ef86b9af8a1f3971334fd8a824497"
	I1209 11:00:03.284567  633119 cri.go:89] found id: "f4ba11ff07ea5065d66a5e8b8d091a9ba9a1c680ab1ade1d19aecf153081d7dd"
	I1209 11:00:03.284570  633119 cri.go:89] found id: "afc0f0aea4c8acce16752f3e50cc08b660c2c26b24932beaf28ba3d1bc596733"
	I1209 11:00:03.284573  633119 cri.go:89] found id: "b6bf7c7cf0d689f954fca58d1d94afa21f5bfd0f606552fbbf1479a9ae1593d3"
	I1209 11:00:03.284575  633119 cri.go:89] found id: "3cf6196a4789ec6471e8e2fe474e71d1756368a4423e425262410e6c7a71e522"
	I1209 11:00:03.284577  633119 cri.go:89] found id: "082e8ff7e6c7e3e1110852687152ddb22202b3134aa3105a8b11aa7d702bcd6a"
	I1209 11:00:03.284584  633119 cri.go:89] found id: "64b96c1c2297057b3b928bc7494df511d909f05efb4f8d5be27458c164efe95f"
	I1209 11:00:03.284586  633119 cri.go:89] found id: "778345b29099a60c663a83117fc6b409ae496d9228f79ecc48c26209e7a87f63"
	I1209 11:00:03.284589  633119 cri.go:89] found id: "d93c68b855d9f72ce368ee4f59250a988032fe781209e2115a8d70ea60f77fee"
	I1209 11:00:03.284591  633119 cri.go:89] found id: "00db8f77881efc15e9bf57aba34890302ec37f9239ba1d191862ea78b9833604"
	I1209 11:00:03.284594  633119 cri.go:89] found id: ""
	I1209 11:00:03.284640  633119 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
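The tail of that log shows kubeadm.go's StartCluster inventorying the CRI containers already present in kube-system: crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system prints only the container IDs of pods in that namespace, which are the IDs listed just before the log is cut off. A sketch of collecting those IDs from Go by shelling out to crictl (not minikube's own code, just the same invocation):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// kubeSystemContainerIDs returns the IDs printed by
	// `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`.
	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}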
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-792382 -n ha-792382
helpers_test.go:261: (dbg) Run:  kubectl --context ha-792382 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.28s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (325.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-714725
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-714725
E1209 11:21:33.303607  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-714725: exit status 82 (2m1.853930131s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-714725-m03"  ...
	* Stopping node "multinode-714725-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-714725" : exit status 82
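The stop gave up after roughly two minutes (2m1.85s) with GUEST_STOP_TIMEOUT because the VMs still reported state "Running" when the deadline hit. The sketch below is not minikube's implementation, only a generic illustration of the stop-then-poll-with-deadline pattern the error implies; vmStop and vmState are hypothetical stand-ins for a VM driver:

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// Hypothetical helpers standing in for a real VM driver.
	func vmStop(name string) error            { fmt.Println("stopping", name); return nil }
	func vmState(name string) (string, error) { return "Running", nil } // never converges in this sketch

	// stopWithDeadline issues a stop and polls the VM state until it is no
	// longer "Running" or the context deadline expires.
	func stopWithDeadline(ctx context.Context, name string) error {
		if err := vmStop(name); err != nil {
			return err
		}
		ticker := time.NewTicker(2 * time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return errors.New("GUEST_STOP_TIMEOUT: vm still running at deadline")
			case <-ticker.C:
				state, err := vmState(name)
				if err == nil && state != "Running" {
					return nil
				}
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		fmt.Println(stopWithDeadline(ctx, "multinode-714725-m02"))
	}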
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-714725 --wait=true -v=8 --alsologtostderr
E1209 11:23:22.652629  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-714725 --wait=true -v=8 --alsologtostderr: (3m20.283212244s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-714725
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-714725 -n multinode-714725
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-714725 logs -n 25: (2.15507584s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-714725 ssh -n                                                                 | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-714725 cp multinode-714725-m02:/home/docker/cp-test.txt                       | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2432959614/001/cp-test_multinode-714725-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n                                                                 | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-714725 cp multinode-714725-m02:/home/docker/cp-test.txt                       | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725:/home/docker/cp-test_multinode-714725-m02_multinode-714725.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n                                                                 | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n multinode-714725 sudo cat                                       | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | /home/docker/cp-test_multinode-714725-m02_multinode-714725.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-714725 cp multinode-714725-m02:/home/docker/cp-test.txt                       | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m03:/home/docker/cp-test_multinode-714725-m02_multinode-714725-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n                                                                 | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n multinode-714725-m03 sudo cat                                   | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | /home/docker/cp-test_multinode-714725-m02_multinode-714725-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-714725 cp testdata/cp-test.txt                                                | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n                                                                 | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-714725 cp multinode-714725-m03:/home/docker/cp-test.txt                       | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2432959614/001/cp-test_multinode-714725-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n                                                                 | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-714725 cp multinode-714725-m03:/home/docker/cp-test.txt                       | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725:/home/docker/cp-test_multinode-714725-m03_multinode-714725.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n                                                                 | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n multinode-714725 sudo cat                                       | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | /home/docker/cp-test_multinode-714725-m03_multinode-714725.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-714725 cp multinode-714725-m03:/home/docker/cp-test.txt                       | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m02:/home/docker/cp-test_multinode-714725-m03_multinode-714725-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n                                                                 | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n multinode-714725-m02 sudo cat                                   | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | /home/docker/cp-test_multinode-714725-m03_multinode-714725-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-714725 node stop m03                                                          | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	| node    | multinode-714725 node start                                                             | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-714725                                                                | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC |                     |
	| stop    | -p multinode-714725                                                                     | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC |                     |
	| start   | -p multinode-714725                                                                     | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:21 UTC | 09 Dec 24 11:25 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-714725                                                                | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:25 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 11:21:56
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 11:21:56.506451  645459 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:21:56.506590  645459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:21:56.506602  645459 out.go:358] Setting ErrFile to fd 2...
	I1209 11:21:56.506606  645459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:21:56.506777  645459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:21:56.507375  645459 out.go:352] Setting JSON to false
	I1209 11:21:56.508430  645459 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14660,"bootTime":1733728656,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 11:21:56.508500  645459 start.go:139] virtualization: kvm guest
	I1209 11:21:56.510860  645459 out.go:177] * [multinode-714725] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 11:21:56.512096  645459 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:21:56.512096  645459 notify.go:220] Checking for updates...
	I1209 11:21:56.514114  645459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:21:56.515318  645459 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:21:56.516314  645459 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:21:56.517463  645459 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 11:21:56.518481  645459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:21:56.520209  645459 config.go:182] Loaded profile config "multinode-714725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:21:56.520366  645459 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:21:56.521236  645459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:21:56.521291  645459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:21:56.537711  645459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37237
	I1209 11:21:56.538201  645459 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:21:56.538908  645459 main.go:141] libmachine: Using API Version  1
	I1209 11:21:56.538936  645459 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:21:56.539311  645459 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:21:56.539522  645459 main.go:141] libmachine: (multinode-714725) Calling .DriverName
	I1209 11:21:56.575486  645459 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 11:21:56.577742  645459 start.go:297] selected driver: kvm2
	I1209 11:21:56.577850  645459 start.go:901] validating driver "kvm2" against &{Name:multinode-714725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-714725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.208 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:f
alse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:21:56.578414  645459 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:21:56.578766  645459 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:21:56.578870  645459 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 11:21:56.595157  645459 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 11:21:56.595880  645459 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:21:56.595918  645459 cni.go:84] Creating CNI manager for ""
	I1209 11:21:56.595970  645459 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1209 11:21:56.596028  645459 start.go:340] cluster config:
	{Name:multinode-714725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-714725 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.208 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisione
r:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:21:56.596174  645459 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:21:56.598134  645459 out.go:177] * Starting "multinode-714725" primary control-plane node in "multinode-714725" cluster
	I1209 11:21:56.599150  645459 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:21:56.599192  645459 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 11:21:56.599200  645459 cache.go:56] Caching tarball of preloaded images
	I1209 11:21:56.599297  645459 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 11:21:56.599311  645459 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 11:21:56.599479  645459 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/config.json ...
	I1209 11:21:56.599683  645459 start.go:360] acquireMachinesLock for multinode-714725: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:21:56.599732  645459 start.go:364] duration metric: took 28.976µs to acquireMachinesLock for "multinode-714725"
	I1209 11:21:56.599752  645459 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:21:56.599763  645459 fix.go:54] fixHost starting: 
	I1209 11:21:56.600015  645459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:21:56.600055  645459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:21:56.615104  645459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37161
	I1209 11:21:56.615603  645459 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:21:56.616267  645459 main.go:141] libmachine: Using API Version  1
	I1209 11:21:56.616293  645459 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:21:56.616623  645459 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:21:56.616789  645459 main.go:141] libmachine: (multinode-714725) Calling .DriverName
	I1209 11:21:56.616915  645459 main.go:141] libmachine: (multinode-714725) Calling .GetState
	I1209 11:21:56.618592  645459 fix.go:112] recreateIfNeeded on multinode-714725: state=Running err=<nil>
	W1209 11:21:56.618617  645459 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:21:56.620335  645459 out.go:177] * Updating the running kvm2 "multinode-714725" VM ...
	I1209 11:21:56.621598  645459 machine.go:93] provisionDockerMachine start ...
	I1209 11:21:56.621621  645459 main.go:141] libmachine: (multinode-714725) Calling .DriverName
	I1209 11:21:56.621820  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:21:56.624458  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.624962  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:21:56.625007  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.625132  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:21:56.625312  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:56.625459  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:56.625598  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:21:56.625769  645459 main.go:141] libmachine: Using SSH client type: native
	I1209 11:21:56.625953  645459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1209 11:21:56.625969  645459 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:21:56.736882  645459 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-714725
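provisionDockerMachine begins by running hostname over SSH with the native Go SSH client. A minimal sketch of that run-one-command pattern using golang.org/x/crypto/ssh with key-based auth; the IP, user and key path mirror values that appear later in this log, but the code itself is only illustrative:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runSSH dials host:22, authenticates with a private key and returns the
	// combined output of a single command, roughly the "About to run SSH
	// command: hostname" step above.
	func runSSH(host, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", host+":22", cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runSSH("192.168.39.31", "docker",
			"/home/jenkins/minikube-integration/20068-609844/.minikube/machines/multinode-714725/id_rsa", "hostname")
		fmt.Println(out, err)
	}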
	
	I1209 11:21:56.736918  645459 main.go:141] libmachine: (multinode-714725) Calling .GetMachineName
	I1209 11:21:56.737216  645459 buildroot.go:166] provisioning hostname "multinode-714725"
	I1209 11:21:56.737244  645459 main.go:141] libmachine: (multinode-714725) Calling .GetMachineName
	I1209 11:21:56.737465  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:21:56.740032  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.740404  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:21:56.740447  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.740605  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:21:56.740784  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:56.740924  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:56.741022  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:21:56.741167  645459 main.go:141] libmachine: Using SSH client type: native
	I1209 11:21:56.741339  645459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1209 11:21:56.741351  645459 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-714725 && echo "multinode-714725" | sudo tee /etc/hostname
	I1209 11:21:56.861170  645459 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-714725
	
	I1209 11:21:56.861222  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:21:56.863897  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.864346  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:21:56.864380  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.864488  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:21:56.864694  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:56.864860  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:56.865024  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:21:56.865245  645459 main.go:141] libmachine: Using SSH client type: native
	I1209 11:21:56.865482  645459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1209 11:21:56.865500  645459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-714725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-714725/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-714725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:21:56.962727  645459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:21:56.962765  645459 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:21:56.962787  645459 buildroot.go:174] setting up certificates
	I1209 11:21:56.962795  645459 provision.go:84] configureAuth start
	I1209 11:21:56.962803  645459 main.go:141] libmachine: (multinode-714725) Calling .GetMachineName
	I1209 11:21:56.963097  645459 main.go:141] libmachine: (multinode-714725) Calling .GetIP
	I1209 11:21:56.965875  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.966267  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:21:56.966296  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.966447  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:21:56.968885  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.969264  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:21:56.969304  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.969485  645459 provision.go:143] copyHostCerts
	I1209 11:21:56.969523  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:21:56.969558  645459 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:21:56.969567  645459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:21:56.969630  645459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:21:56.969748  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:21:56.969772  645459 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:21:56.969781  645459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:21:56.969813  645459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:21:56.969869  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:21:56.969886  645459 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:21:56.969892  645459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:21:56.969913  645459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:21:56.969960  645459 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.multinode-714725 san=[127.0.0.1 192.168.39.31 localhost minikube multinode-714725]
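provision.go then mints a server certificate signed by the CA under .minikube/certs, with SANs covering 127.0.0.1, the VM IP and the hostnames listed in the san=[...] field above. A compact, self-contained sketch of issuing such a SAN certificate with crypto/x509; it creates a throwaway CA in memory (the real code loads ca.pem and ca-key.pem from disk), and error handling is elided for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// In-memory CA for the sketch; the provisioning step loads its CA from disk.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "illustrative-ca"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs shown in the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "multinode-714725", Organization: []string{"jenkins.multinode-714725"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "multinode-714725"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.31")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
	}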
	I1209 11:21:57.036368  645459 provision.go:177] copyRemoteCerts
	I1209 11:21:57.036438  645459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:21:57.036521  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:21:57.039147  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:57.039465  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:21:57.039502  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:57.039607  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:21:57.039789  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:57.039914  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:21:57.040047  645459 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/multinode-714725/id_rsa Username:docker}
	I1209 11:21:57.121698  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 11:21:57.121784  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1209 11:21:57.148922  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 11:21:57.149004  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 11:21:57.174235  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 11:21:57.174312  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:21:57.202534  645459 provision.go:87] duration metric: took 239.722079ms to configureAuth
	I1209 11:21:57.202569  645459 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:21:57.202834  645459 config.go:182] Loaded profile config "multinode-714725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:21:57.202949  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:21:57.205753  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:57.206203  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:21:57.206245  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:57.206467  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:21:57.206672  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:57.206939  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:57.207076  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:21:57.207287  645459 main.go:141] libmachine: Using SSH client type: native
	I1209 11:21:57.207469  645459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1209 11:21:57.207485  645459 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:23:27.931364  645459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:23:27.931424  645459 machine.go:96] duration metric: took 1m31.309808368s to provisionDockerMachine
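Most of that 1m31s is the preceding step: the SSH command that writes /etc/sysconfig/crio.minikube and runs systemctl restart crio was issued at 11:21:57 and only returned at 11:23:27, about 90 seconds later. A tiny sketch of running such a command under an explicit timeout so a slow restart surfaces as an error instead of silently stretching the provisioning step; the command and the two-minute limit are illustrative, not minikube's behaviour:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Bound the restart instead of waiting indefinitely.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()

		cmd := exec.CommandContext(ctx, "sudo", "systemctl", "restart", "crio")
		start := time.Now()
		err := cmd.Run()
		fmt.Printf("restart took %s, err=%v (ctx err=%v)\n", time.Since(start), err, ctx.Err())
	}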
	I1209 11:23:27.931444  645459 start.go:293] postStartSetup for "multinode-714725" (driver="kvm2")
	I1209 11:23:27.931455  645459 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:23:27.931492  645459 main.go:141] libmachine: (multinode-714725) Calling .DriverName
	I1209 11:23:27.931834  645459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:23:27.931875  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:23:27.935355  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:27.935796  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:23:27.935832  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:27.935980  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:23:27.936191  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:23:27.936385  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:23:27.936545  645459 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/multinode-714725/id_rsa Username:docker}
	I1209 11:23:28.018689  645459 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:23:28.022951  645459 command_runner.go:130] > NAME=Buildroot
	I1209 11:23:28.022976  645459 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1209 11:23:28.022981  645459 command_runner.go:130] > ID=buildroot
	I1209 11:23:28.022986  645459 command_runner.go:130] > VERSION_ID=2023.02.9
	I1209 11:23:28.022992  645459 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1209 11:23:28.023026  645459 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:23:28.023043  645459 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:23:28.023116  645459 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:23:28.023188  645459 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:23:28.023198  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /etc/ssl/certs/6170172.pem
	I1209 11:23:28.023284  645459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:23:28.032913  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:23:28.055736  645459 start.go:296] duration metric: took 124.276162ms for postStartSetup
	I1209 11:23:28.055813  645459 fix.go:56] duration metric: took 1m31.456048715s for fixHost
	I1209 11:23:28.055846  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:23:28.058820  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:28.059195  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:23:28.059227  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:28.059471  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:23:28.059704  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:23:28.059845  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:23:28.060037  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:23:28.060205  645459 main.go:141] libmachine: Using SSH client type: native
	I1209 11:23:28.060387  645459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1209 11:23:28.060399  645459 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:23:28.159101  645459 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733743408.136165136
	
	I1209 11:23:28.159129  645459 fix.go:216] guest clock: 1733743408.136165136
	I1209 11:23:28.159139  645459 fix.go:229] Guest: 2024-12-09 11:23:28.136165136 +0000 UTC Remote: 2024-12-09 11:23:28.055820906 +0000 UTC m=+91.591790282 (delta=80.34423ms)
	I1209 11:23:28.159170  645459 fix.go:200] guest clock delta is within tolerance: 80.34423ms
	I1209 11:23:28.159177  645459 start.go:83] releasing machines lock for "multinode-714725", held for 1m31.559433598s
	I1209 11:23:28.159202  645459 main.go:141] libmachine: (multinode-714725) Calling .DriverName
	I1209 11:23:28.159477  645459 main.go:141] libmachine: (multinode-714725) Calling .GetIP
	I1209 11:23:28.162352  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:28.162699  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:23:28.162732  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:28.162908  645459 main.go:141] libmachine: (multinode-714725) Calling .DriverName
	I1209 11:23:28.163420  645459 main.go:141] libmachine: (multinode-714725) Calling .DriverName
	I1209 11:23:28.163600  645459 main.go:141] libmachine: (multinode-714725) Calling .DriverName
	I1209 11:23:28.163684  645459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:23:28.163745  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:23:28.163809  645459 ssh_runner.go:195] Run: cat /version.json
	I1209 11:23:28.163832  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:23:28.166460  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:28.166602  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:28.166891  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:23:28.166923  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:28.166947  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:23:28.166970  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:28.167080  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:23:28.167224  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:23:28.167297  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:23:28.167382  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:23:28.167461  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:23:28.167561  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:23:28.167573  645459 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/multinode-714725/id_rsa Username:docker}
	I1209 11:23:28.167668  645459 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/multinode-714725/id_rsa Username:docker}
	I1209 11:23:28.238540  645459 command_runner.go:130] > {"iso_version": "v1.34.0-1730913550-19917", "kicbase_version": "v0.0.45-1730888964-19917", "minikube_version": "v1.34.0", "commit": "72f43dde5d92c8ae490d0727dad53fb3ed6aa41e"}
	I1209 11:23:28.238849  645459 ssh_runner.go:195] Run: systemctl --version
	I1209 11:23:28.272471  645459 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1209 11:23:28.273232  645459 command_runner.go:130] > systemd 252 (252)
	I1209 11:23:28.273266  645459 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1209 11:23:28.273348  645459 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:23:28.429710  645459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 11:23:28.439362  645459 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1209 11:23:28.439685  645459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:23:28.439760  645459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:23:28.448646  645459 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 11:23:28.448674  645459 start.go:495] detecting cgroup driver to use...
	I1209 11:23:28.448751  645459 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:23:28.464648  645459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:23:28.477704  645459 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:23:28.477784  645459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:23:28.490155  645459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:23:28.502955  645459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:23:28.639125  645459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:23:28.776815  645459 docker.go:233] disabling docker service ...
	I1209 11:23:28.776892  645459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:23:28.792067  645459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:23:28.805251  645459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:23:28.990986  645459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:23:29.181706  645459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:23:29.197362  645459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:23:29.214609  645459 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1209 11:23:29.214681  645459 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:23:29.214742  645459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:23:29.224276  645459 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:23:29.224341  645459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:23:29.233797  645459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:23:29.243129  645459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:23:29.252851  645459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:23:29.262937  645459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:23:29.272413  645459 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:23:29.282248  645459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:23:29.291812  645459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:23:29.301037  645459 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1209 11:23:29.301110  645459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:23:29.310190  645459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:23:29.450033  645459 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:23:29.674920  645459 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:23:29.674999  645459 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:23:29.679355  645459 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1209 11:23:29.679383  645459 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1209 11:23:29.679394  645459 command_runner.go:130] > Device: 0,22	Inode: 1372        Links: 1
	I1209 11:23:29.679405  645459 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 11:23:29.679412  645459 command_runner.go:130] > Access: 2024-12-09 11:23:29.524097371 +0000
	I1209 11:23:29.679454  645459 command_runner.go:130] > Modify: 2024-12-09 11:23:29.524097371 +0000
	I1209 11:23:29.679479  645459 command_runner.go:130] > Change: 2024-12-09 11:23:29.524097371 +0000
	I1209 11:23:29.679490  645459 command_runner.go:130] >  Birth: -
	I1209 11:23:29.679603  645459 start.go:563] Will wait 60s for crictl version
	I1209 11:23:29.679667  645459 ssh_runner.go:195] Run: which crictl
	I1209 11:23:29.683025  645459 command_runner.go:130] > /usr/bin/crictl
	I1209 11:23:29.683095  645459 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:23:29.721744  645459 command_runner.go:130] > Version:  0.1.0
	I1209 11:23:29.721776  645459 command_runner.go:130] > RuntimeName:  cri-o
	I1209 11:23:29.721782  645459 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1209 11:23:29.721978  645459 command_runner.go:130] > RuntimeApiVersion:  v1
	I1209 11:23:29.723196  645459 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:23:29.723267  645459 ssh_runner.go:195] Run: crio --version
	I1209 11:23:29.749497  645459 command_runner.go:130] > crio version 1.29.1
	I1209 11:23:29.749525  645459 command_runner.go:130] > Version:        1.29.1
	I1209 11:23:29.749534  645459 command_runner.go:130] > GitCommit:      unknown
	I1209 11:23:29.749540  645459 command_runner.go:130] > GitCommitDate:  unknown
	I1209 11:23:29.749548  645459 command_runner.go:130] > GitTreeState:   clean
	I1209 11:23:29.749557  645459 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1209 11:23:29.749563  645459 command_runner.go:130] > GoVersion:      go1.21.6
	I1209 11:23:29.749568  645459 command_runner.go:130] > Compiler:       gc
	I1209 11:23:29.749574  645459 command_runner.go:130] > Platform:       linux/amd64
	I1209 11:23:29.749578  645459 command_runner.go:130] > Linkmode:       dynamic
	I1209 11:23:29.749583  645459 command_runner.go:130] > BuildTags:      
	I1209 11:23:29.749589  645459 command_runner.go:130] >   containers_image_ostree_stub
	I1209 11:23:29.749596  645459 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1209 11:23:29.749600  645459 command_runner.go:130] >   btrfs_noversion
	I1209 11:23:29.749604  645459 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1209 11:23:29.749612  645459 command_runner.go:130] >   libdm_no_deferred_remove
	I1209 11:23:29.749615  645459 command_runner.go:130] >   seccomp
	I1209 11:23:29.749619  645459 command_runner.go:130] > LDFlags:          unknown
	I1209 11:23:29.749624  645459 command_runner.go:130] > SeccompEnabled:   true
	I1209 11:23:29.749628  645459 command_runner.go:130] > AppArmorEnabled:  false
	I1209 11:23:29.749696  645459 ssh_runner.go:195] Run: crio --version
	I1209 11:23:29.775891  645459 command_runner.go:130] > crio version 1.29.1
	I1209 11:23:29.775929  645459 command_runner.go:130] > Version:        1.29.1
	I1209 11:23:29.775938  645459 command_runner.go:130] > GitCommit:      unknown
	I1209 11:23:29.775945  645459 command_runner.go:130] > GitCommitDate:  unknown
	I1209 11:23:29.775952  645459 command_runner.go:130] > GitTreeState:   clean
	I1209 11:23:29.775961  645459 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1209 11:23:29.775967  645459 command_runner.go:130] > GoVersion:      go1.21.6
	I1209 11:23:29.775974  645459 command_runner.go:130] > Compiler:       gc
	I1209 11:23:29.775982  645459 command_runner.go:130] > Platform:       linux/amd64
	I1209 11:23:29.775991  645459 command_runner.go:130] > Linkmode:       dynamic
	I1209 11:23:29.775997  645459 command_runner.go:130] > BuildTags:      
	I1209 11:23:29.776005  645459 command_runner.go:130] >   containers_image_ostree_stub
	I1209 11:23:29.776009  645459 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1209 11:23:29.776013  645459 command_runner.go:130] >   btrfs_noversion
	I1209 11:23:29.776019  645459 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1209 11:23:29.776023  645459 command_runner.go:130] >   libdm_no_deferred_remove
	I1209 11:23:29.776030  645459 command_runner.go:130] >   seccomp
	I1209 11:23:29.776034  645459 command_runner.go:130] > LDFlags:          unknown
	I1209 11:23:29.776039  645459 command_runner.go:130] > SeccompEnabled:   true
	I1209 11:23:29.776043  645459 command_runner.go:130] > AppArmorEnabled:  false
	I1209 11:23:29.778797  645459 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:23:29.780230  645459 main.go:141] libmachine: (multinode-714725) Calling .GetIP
	I1209 11:23:29.782960  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:29.783349  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:23:29.783388  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:29.783606  645459 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 11:23:29.787570  645459 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1209 11:23:29.787679  645459 kubeadm.go:883] updating cluster {Name:multinode-714725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-714725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.208 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:23:29.787813  645459 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:23:29.787855  645459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:23:29.825610  645459 command_runner.go:130] > {
	I1209 11:23:29.825633  645459 command_runner.go:130] >   "images": [
	I1209 11:23:29.825637  645459 command_runner.go:130] >     {
	I1209 11:23:29.825645  645459 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1209 11:23:29.825650  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.825655  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1209 11:23:29.825659  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825663  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.825671  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1209 11:23:29.825678  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1209 11:23:29.825682  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825686  645459 command_runner.go:130] >       "size": "94965812",
	I1209 11:23:29.825694  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.825700  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.825706  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.825710  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.825716  645459 command_runner.go:130] >     },
	I1209 11:23:29.825719  645459 command_runner.go:130] >     {
	I1209 11:23:29.825725  645459 command_runner.go:130] >       "id": "50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e",
	I1209 11:23:29.825731  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.825737  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241108-5c6d2daf"
	I1209 11:23:29.825741  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825745  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.825752  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3",
	I1209 11:23:29.825760  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"
	I1209 11:23:29.825767  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825771  645459 command_runner.go:130] >       "size": "94963761",
	I1209 11:23:29.825775  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.825782  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.825786  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.825790  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.825794  645459 command_runner.go:130] >     },
	I1209 11:23:29.825797  645459 command_runner.go:130] >     {
	I1209 11:23:29.825804  645459 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1209 11:23:29.825808  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.825814  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1209 11:23:29.825818  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825824  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.825831  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1209 11:23:29.825838  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1209 11:23:29.825843  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825847  645459 command_runner.go:130] >       "size": "1363676",
	I1209 11:23:29.825851  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.825856  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.825860  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.825866  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.825869  645459 command_runner.go:130] >     },
	I1209 11:23:29.825872  645459 command_runner.go:130] >     {
	I1209 11:23:29.825878  645459 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1209 11:23:29.825884  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.825889  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 11:23:29.825895  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825899  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.825908  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1209 11:23:29.825922  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1209 11:23:29.825928  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825932  645459 command_runner.go:130] >       "size": "31470524",
	I1209 11:23:29.825939  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.825943  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.825950  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.825954  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.825960  645459 command_runner.go:130] >     },
	I1209 11:23:29.825963  645459 command_runner.go:130] >     {
	I1209 11:23:29.825971  645459 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1209 11:23:29.825975  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.825983  645459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1209 11:23:29.825986  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825990  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.825997  645459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1209 11:23:29.826005  645459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1209 11:23:29.826009  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826013  645459 command_runner.go:130] >       "size": "63273227",
	I1209 11:23:29.826017  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.826020  645459 command_runner.go:130] >       "username": "nonroot",
	I1209 11:23:29.826024  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.826028  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.826032  645459 command_runner.go:130] >     },
	I1209 11:23:29.826035  645459 command_runner.go:130] >     {
	I1209 11:23:29.826041  645459 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1209 11:23:29.826047  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.826052  645459 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1209 11:23:29.826057  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826061  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.826070  645459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1209 11:23:29.826077  645459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1209 11:23:29.826083  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826088  645459 command_runner.go:130] >       "size": "149009664",
	I1209 11:23:29.826094  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.826098  645459 command_runner.go:130] >         "value": "0"
	I1209 11:23:29.826104  645459 command_runner.go:130] >       },
	I1209 11:23:29.826109  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.826116  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.826120  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.826127  645459 command_runner.go:130] >     },
	I1209 11:23:29.826130  645459 command_runner.go:130] >     {
	I1209 11:23:29.826136  645459 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1209 11:23:29.826142  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.826147  645459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1209 11:23:29.826155  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826159  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.826184  645459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1209 11:23:29.826227  645459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1209 11:23:29.826243  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826248  645459 command_runner.go:130] >       "size": "95274464",
	I1209 11:23:29.826252  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.826256  645459 command_runner.go:130] >         "value": "0"
	I1209 11:23:29.826260  645459 command_runner.go:130] >       },
	I1209 11:23:29.826265  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.826269  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.826273  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.826279  645459 command_runner.go:130] >     },
	I1209 11:23:29.826282  645459 command_runner.go:130] >     {
	I1209 11:23:29.826288  645459 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1209 11:23:29.826292  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.826298  645459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1209 11:23:29.826303  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826307  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.826326  645459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1209 11:23:29.826336  645459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1209 11:23:29.826342  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826346  645459 command_runner.go:130] >       "size": "89474374",
	I1209 11:23:29.826352  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.826356  645459 command_runner.go:130] >         "value": "0"
	I1209 11:23:29.826361  645459 command_runner.go:130] >       },
	I1209 11:23:29.826369  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.826373  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.826377  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.826379  645459 command_runner.go:130] >     },
	I1209 11:23:29.826382  645459 command_runner.go:130] >     {
	I1209 11:23:29.826388  645459 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1209 11:23:29.826391  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.826396  645459 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1209 11:23:29.826399  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826409  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.826416  645459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1209 11:23:29.826422  645459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1209 11:23:29.826426  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826433  645459 command_runner.go:130] >       "size": "92783513",
	I1209 11:23:29.826436  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.826440  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.826444  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.826448  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.826450  645459 command_runner.go:130] >     },
	I1209 11:23:29.826454  645459 command_runner.go:130] >     {
	I1209 11:23:29.826459  645459 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1209 11:23:29.826463  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.826467  645459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1209 11:23:29.826470  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826474  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.826481  645459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1209 11:23:29.826488  645459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1209 11:23:29.826492  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826495  645459 command_runner.go:130] >       "size": "68457798",
	I1209 11:23:29.826499  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.826503  645459 command_runner.go:130] >         "value": "0"
	I1209 11:23:29.826506  645459 command_runner.go:130] >       },
	I1209 11:23:29.826510  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.826514  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.826517  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.826521  645459 command_runner.go:130] >     },
	I1209 11:23:29.826525  645459 command_runner.go:130] >     {
	I1209 11:23:29.826531  645459 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1209 11:23:29.826535  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.826539  645459 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1209 11:23:29.826546  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826549  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.826556  645459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1209 11:23:29.826563  645459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1209 11:23:29.826572  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826578  645459 command_runner.go:130] >       "size": "742080",
	I1209 11:23:29.826587  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.826593  645459 command_runner.go:130] >         "value": "65535"
	I1209 11:23:29.826601  645459 command_runner.go:130] >       },
	I1209 11:23:29.826606  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.826611  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.826617  645459 command_runner.go:130] >       "pinned": true
	I1209 11:23:29.826625  645459 command_runner.go:130] >     }
	I1209 11:23:29.826631  645459 command_runner.go:130] >   ]
	I1209 11:23:29.826636  645459 command_runner.go:130] > }
	I1209 11:23:29.826922  645459 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:23:29.826948  645459 crio.go:433] Images already preloaded, skipping extraction
	I1209 11:23:29.827026  645459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:23:29.857666  645459 command_runner.go:130] > {
	I1209 11:23:29.857691  645459 command_runner.go:130] >   "images": [
	I1209 11:23:29.857694  645459 command_runner.go:130] >     {
	I1209 11:23:29.857702  645459 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1209 11:23:29.857708  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.857715  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1209 11:23:29.857719  645459 command_runner.go:130] >       ],
	I1209 11:23:29.857723  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.857732  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1209 11:23:29.857739  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1209 11:23:29.857743  645459 command_runner.go:130] >       ],
	I1209 11:23:29.857748  645459 command_runner.go:130] >       "size": "94965812",
	I1209 11:23:29.857752  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.857756  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.857764  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.857771  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.857777  645459 command_runner.go:130] >     },
	I1209 11:23:29.857781  645459 command_runner.go:130] >     {
	I1209 11:23:29.857787  645459 command_runner.go:130] >       "id": "50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e",
	I1209 11:23:29.857791  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.857796  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241108-5c6d2daf"
	I1209 11:23:29.857800  645459 command_runner.go:130] >       ],
	I1209 11:23:29.857804  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.857811  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3",
	I1209 11:23:29.857821  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"
	I1209 11:23:29.857824  645459 command_runner.go:130] >       ],
	I1209 11:23:29.857828  645459 command_runner.go:130] >       "size": "94963761",
	I1209 11:23:29.857832  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.857839  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.857845  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.857852  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.857855  645459 command_runner.go:130] >     },
	I1209 11:23:29.857859  645459 command_runner.go:130] >     {
	I1209 11:23:29.857865  645459 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1209 11:23:29.857870  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.857875  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1209 11:23:29.857879  645459 command_runner.go:130] >       ],
	I1209 11:23:29.857885  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.857892  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1209 11:23:29.857901  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1209 11:23:29.857905  645459 command_runner.go:130] >       ],
	I1209 11:23:29.857934  645459 command_runner.go:130] >       "size": "1363676",
	I1209 11:23:29.857944  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.857947  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.857954  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.857959  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.857962  645459 command_runner.go:130] >     },
	I1209 11:23:29.857966  645459 command_runner.go:130] >     {
	I1209 11:23:29.857971  645459 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1209 11:23:29.857980  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.857985  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 11:23:29.857988  645459 command_runner.go:130] >       ],
	I1209 11:23:29.857992  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.858000  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1209 11:23:29.858011  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1209 11:23:29.858015  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858019  645459 command_runner.go:130] >       "size": "31470524",
	I1209 11:23:29.858023  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.858026  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.858030  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.858034  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.858037  645459 command_runner.go:130] >     },
	I1209 11:23:29.858041  645459 command_runner.go:130] >     {
	I1209 11:23:29.858046  645459 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1209 11:23:29.858050  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.858055  645459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1209 11:23:29.858058  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858062  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.858068  645459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1209 11:23:29.858076  645459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1209 11:23:29.858080  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858084  645459 command_runner.go:130] >       "size": "63273227",
	I1209 11:23:29.858088  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.858092  645459 command_runner.go:130] >       "username": "nonroot",
	I1209 11:23:29.858098  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.858102  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.858105  645459 command_runner.go:130] >     },
	I1209 11:23:29.858108  645459 command_runner.go:130] >     {
	I1209 11:23:29.858114  645459 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1209 11:23:29.858118  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.858123  645459 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1209 11:23:29.858126  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858130  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.858137  645459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1209 11:23:29.858143  645459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1209 11:23:29.858147  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858150  645459 command_runner.go:130] >       "size": "149009664",
	I1209 11:23:29.858154  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.858158  645459 command_runner.go:130] >         "value": "0"
	I1209 11:23:29.858164  645459 command_runner.go:130] >       },
	I1209 11:23:29.858178  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.858182  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.858186  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.858190  645459 command_runner.go:130] >     },
	I1209 11:23:29.858193  645459 command_runner.go:130] >     {
	I1209 11:23:29.858198  645459 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1209 11:23:29.858201  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.858206  645459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1209 11:23:29.858209  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858213  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.858220  645459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1209 11:23:29.858227  645459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1209 11:23:29.858231  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858235  645459 command_runner.go:130] >       "size": "95274464",
	I1209 11:23:29.858238  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.858242  645459 command_runner.go:130] >         "value": "0"
	I1209 11:23:29.858246  645459 command_runner.go:130] >       },
	I1209 11:23:29.858251  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.858258  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.858261  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.858264  645459 command_runner.go:130] >     },
	I1209 11:23:29.858267  645459 command_runner.go:130] >     {
	I1209 11:23:29.858273  645459 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1209 11:23:29.858279  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.858284  645459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1209 11:23:29.858290  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858294  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.858310  645459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1209 11:23:29.858317  645459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1209 11:23:29.858324  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858328  645459 command_runner.go:130] >       "size": "89474374",
	I1209 11:23:29.858332  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.858335  645459 command_runner.go:130] >         "value": "0"
	I1209 11:23:29.858339  645459 command_runner.go:130] >       },
	I1209 11:23:29.858343  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.858347  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.858352  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.858356  645459 command_runner.go:130] >     },
	I1209 11:23:29.858361  645459 command_runner.go:130] >     {
	I1209 11:23:29.858367  645459 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1209 11:23:29.858371  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.858375  645459 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1209 11:23:29.858379  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858383  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.858389  645459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1209 11:23:29.858399  645459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1209 11:23:29.858405  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858409  645459 command_runner.go:130] >       "size": "92783513",
	I1209 11:23:29.858413  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.858417  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.858421  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.858424  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.858428  645459 command_runner.go:130] >     },
	I1209 11:23:29.858431  645459 command_runner.go:130] >     {
	I1209 11:23:29.858438  645459 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1209 11:23:29.858444  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.858449  645459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1209 11:23:29.858454  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858459  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.858479  645459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1209 11:23:29.858498  645459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1209 11:23:29.858504  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858508  645459 command_runner.go:130] >       "size": "68457798",
	I1209 11:23:29.858512  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.858516  645459 command_runner.go:130] >         "value": "0"
	I1209 11:23:29.858520  645459 command_runner.go:130] >       },
	I1209 11:23:29.858524  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.858528  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.858532  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.858535  645459 command_runner.go:130] >     },
	I1209 11:23:29.858538  645459 command_runner.go:130] >     {
	I1209 11:23:29.858544  645459 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1209 11:23:29.858547  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.858552  645459 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1209 11:23:29.858555  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858559  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.858566  645459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1209 11:23:29.858573  645459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1209 11:23:29.858576  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858580  645459 command_runner.go:130] >       "size": "742080",
	I1209 11:23:29.858587  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.858591  645459 command_runner.go:130] >         "value": "65535"
	I1209 11:23:29.858595  645459 command_runner.go:130] >       },
	I1209 11:23:29.858599  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.858605  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.858609  645459 command_runner.go:130] >       "pinned": true
	I1209 11:23:29.858613  645459 command_runner.go:130] >     }
	I1209 11:23:29.858616  645459 command_runner.go:130] >   ]
	I1209 11:23:29.858619  645459 command_runner.go:130] > }
	I1209 11:23:29.859134  645459 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:23:29.859156  645459 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:23:29.859166  645459 kubeadm.go:934] updating node { 192.168.39.31 8443 v1.31.2 crio true true} ...
	I1209 11:23:29.859309  645459 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-714725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-714725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:23:29.859403  645459 ssh_runner.go:195] Run: crio config
	I1209 11:23:29.898296  645459 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1209 11:23:29.898337  645459 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1209 11:23:29.898346  645459 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1209 11:23:29.898351  645459 command_runner.go:130] > #
	I1209 11:23:29.898374  645459 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1209 11:23:29.898384  645459 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1209 11:23:29.898394  645459 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1209 11:23:29.898416  645459 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1209 11:23:29.898427  645459 command_runner.go:130] > # reload'.
	I1209 11:23:29.898437  645459 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1209 11:23:29.898451  645459 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1209 11:23:29.898466  645459 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1209 11:23:29.898479  645459 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1209 11:23:29.898485  645459 command_runner.go:130] > [crio]
	I1209 11:23:29.898498  645459 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1209 11:23:29.898506  645459 command_runner.go:130] > # container images, in this directory.
	I1209 11:23:29.898513  645459 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1209 11:23:29.898526  645459 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1209 11:23:29.898566  645459 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1209 11:23:29.898593  645459 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores newly pulled images in this directory, separately from Root.
	I1209 11:23:29.898605  645459 command_runner.go:130] > # imagestore = ""
	I1209 11:23:29.898619  645459 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1209 11:23:29.898630  645459 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1209 11:23:29.898642  645459 command_runner.go:130] > storage_driver = "overlay"
	I1209 11:23:29.898658  645459 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1209 11:23:29.898671  645459 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1209 11:23:29.898680  645459 command_runner.go:130] > storage_option = [
	I1209 11:23:29.898688  645459 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1209 11:23:29.898696  645459 command_runner.go:130] > ]
	I1209 11:23:29.898706  645459 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1209 11:23:29.898725  645459 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1209 11:23:29.898736  645459 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1209 11:23:29.898745  645459 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1209 11:23:29.898759  645459 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1209 11:23:29.898769  645459 command_runner.go:130] > # always happen on a node reboot
	I1209 11:23:29.898777  645459 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1209 11:23:29.898797  645459 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1209 11:23:29.898811  645459 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1209 11:23:29.898818  645459 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1209 11:23:29.898825  645459 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1209 11:23:29.898836  645459 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1209 11:23:29.898851  645459 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1209 11:23:29.898859  645459 command_runner.go:130] > # internal_wipe = true
	I1209 11:23:29.898871  645459 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1209 11:23:29.898880  645459 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1209 11:23:29.898892  645459 command_runner.go:130] > # internal_repair = false
	I1209 11:23:29.898899  645459 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1209 11:23:29.898914  645459 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1209 11:23:29.898925  645459 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1209 11:23:29.898937  645459 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1209 11:23:29.898949  645459 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1209 11:23:29.898954  645459 command_runner.go:130] > [crio.api]
	I1209 11:23:29.898961  645459 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1209 11:23:29.898969  645459 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1209 11:23:29.898978  645459 command_runner.go:130] > # IP address on which the stream server will listen.
	I1209 11:23:29.898986  645459 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1209 11:23:29.899000  645459 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1209 11:23:29.899013  645459 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1209 11:23:29.899024  645459 command_runner.go:130] > # stream_port = "0"
	I1209 11:23:29.899033  645459 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1209 11:23:29.899043  645459 command_runner.go:130] > # stream_enable_tls = false
	I1209 11:23:29.899053  645459 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1209 11:23:29.899064  645459 command_runner.go:130] > # stream_idle_timeout = ""
	I1209 11:23:29.899074  645459 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1209 11:23:29.899087  645459 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1209 11:23:29.899093  645459 command_runner.go:130] > # minutes.
	I1209 11:23:29.899103  645459 command_runner.go:130] > # stream_tls_cert = ""
	I1209 11:23:29.899120  645459 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1209 11:23:29.899134  645459 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1209 11:23:29.899147  645459 command_runner.go:130] > # stream_tls_key = ""
	I1209 11:23:29.899159  645459 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1209 11:23:29.899172  645459 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1209 11:23:29.899189  645459 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1209 11:23:29.899199  645459 command_runner.go:130] > # stream_tls_ca = ""
	I1209 11:23:29.899210  645459 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 11:23:29.899226  645459 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1209 11:23:29.899240  645459 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 11:23:29.899251  645459 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
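Editor's note: for scale, the override above works out to 16777216 bytes = 16 × 1024 × 1024 = 16 MiB for both send and receive, compared with the 80 × 1024 × 1024 = 83886080-byte (80 MiB) default mentioned in the comments.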
	I1209 11:23:29.899264  645459 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1209 11:23:29.899277  645459 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1209 11:23:29.899286  645459 command_runner.go:130] > [crio.runtime]
	I1209 11:23:29.899294  645459 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1209 11:23:29.899305  645459 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1209 11:23:29.899314  645459 command_runner.go:130] > # "nofile=1024:2048"
	I1209 11:23:29.899327  645459 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1209 11:23:29.899337  645459 command_runner.go:130] > # default_ulimits = [
	I1209 11:23:29.899346  645459 command_runner.go:130] > # ]
	I1209 11:23:29.899356  645459 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1209 11:23:29.899368  645459 command_runner.go:130] > # no_pivot = false
	I1209 11:23:29.899382  645459 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1209 11:23:29.899397  645459 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1209 11:23:29.899409  645459 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1209 11:23:29.899423  645459 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1209 11:23:29.899434  645459 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1209 11:23:29.899445  645459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1209 11:23:29.899456  645459 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1209 11:23:29.899463  645459 command_runner.go:130] > # Cgroup setting for conmon
	I1209 11:23:29.899477  645459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1209 11:23:29.899487  645459 command_runner.go:130] > conmon_cgroup = "pod"
	I1209 11:23:29.899499  645459 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1209 11:23:29.899512  645459 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1209 11:23:29.899525  645459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1209 11:23:29.899535  645459 command_runner.go:130] > conmon_env = [
	I1209 11:23:29.899544  645459 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1209 11:23:29.899559  645459 command_runner.go:130] > ]
	I1209 11:23:29.899573  645459 command_runner.go:130] > # Additional environment variables to set for all the
	I1209 11:23:29.899585  645459 command_runner.go:130] > # containers. These are overridden if set in the
	I1209 11:23:29.899598  645459 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1209 11:23:29.899609  645459 command_runner.go:130] > # default_env = [
	I1209 11:23:29.899614  645459 command_runner.go:130] > # ]
	I1209 11:23:29.899627  645459 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1209 11:23:29.899640  645459 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1209 11:23:29.899650  645459 command_runner.go:130] > # selinux = false
	I1209 11:23:29.899661  645459 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1209 11:23:29.899675  645459 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1209 11:23:29.899687  645459 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1209 11:23:29.899697  645459 command_runner.go:130] > # seccomp_profile = ""
	I1209 11:23:29.899706  645459 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1209 11:23:29.899718  645459 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1209 11:23:29.899731  645459 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1209 11:23:29.899741  645459 command_runner.go:130] > # which might increase security.
	I1209 11:23:29.899753  645459 command_runner.go:130] > # This option is currently deprecated,
	I1209 11:23:29.899761  645459 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1209 11:23:29.899773  645459 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1209 11:23:29.899786  645459 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1209 11:23:29.899800  645459 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1209 11:23:29.899813  645459 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1209 11:23:29.899827  645459 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1209 11:23:29.899842  645459 command_runner.go:130] > # This option supports live configuration reload.
	I1209 11:23:29.899852  645459 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1209 11:23:29.899862  645459 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1209 11:23:29.899871  645459 command_runner.go:130] > # the cgroup blockio controller.
	I1209 11:23:29.899879  645459 command_runner.go:130] > # blockio_config_file = ""
	I1209 11:23:29.899892  645459 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1209 11:23:29.899907  645459 command_runner.go:130] > # blockio parameters.
	I1209 11:23:29.899915  645459 command_runner.go:130] > # blockio_reload = false
	I1209 11:23:29.899924  645459 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1209 11:23:29.899934  645459 command_runner.go:130] > # irqbalance daemon.
	I1209 11:23:29.899944  645459 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1209 11:23:29.899956  645459 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1209 11:23:29.899970  645459 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1209 11:23:29.899983  645459 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1209 11:23:29.900002  645459 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1209 11:23:29.900020  645459 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1209 11:23:29.900031  645459 command_runner.go:130] > # This option supports live configuration reload.
	I1209 11:23:29.900042  645459 command_runner.go:130] > # rdt_config_file = ""
	I1209 11:23:29.900051  645459 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1209 11:23:29.900061  645459 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1209 11:23:29.900090  645459 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1209 11:23:29.900102  645459 command_runner.go:130] > # separate_pull_cgroup = ""
	I1209 11:23:29.900112  645459 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1209 11:23:29.900126  645459 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1209 11:23:29.900133  645459 command_runner.go:130] > # will be added.
	I1209 11:23:29.900143  645459 command_runner.go:130] > # default_capabilities = [
	I1209 11:23:29.900149  645459 command_runner.go:130] > # 	"CHOWN",
	I1209 11:23:29.900159  645459 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1209 11:23:29.900167  645459 command_runner.go:130] > # 	"FSETID",
	I1209 11:23:29.900176  645459 command_runner.go:130] > # 	"FOWNER",
	I1209 11:23:29.900183  645459 command_runner.go:130] > # 	"SETGID",
	I1209 11:23:29.900192  645459 command_runner.go:130] > # 	"SETUID",
	I1209 11:23:29.900197  645459 command_runner.go:130] > # 	"SETPCAP",
	I1209 11:23:29.900205  645459 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1209 11:23:29.900221  645459 command_runner.go:130] > # 	"KILL",
	I1209 11:23:29.900230  645459 command_runner.go:130] > # ]
	I1209 11:23:29.900241  645459 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1209 11:23:29.900255  645459 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1209 11:23:29.900266  645459 command_runner.go:130] > # add_inheritable_capabilities = false
	I1209 11:23:29.900275  645459 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1209 11:23:29.900287  645459 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 11:23:29.900297  645459 command_runner.go:130] > default_sysctls = [
	I1209 11:23:29.900304  645459 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1209 11:23:29.900312  645459 command_runner.go:130] > ]
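Editor's note: the net.ipv4.ip_unprivileged_port_start=0 sysctl above lets non-root container processes bind ports below 1024. A minimal sketch of the effect, assuming it runs as an unprivileged user inside a pod where that sysctl is applied (hypothetical example, not part of the test):

	package main

	import (
		"log"
		"net"
	)

	func main() {
		// With ip_unprivileged_port_start=0 this succeeds for a non-root user;
		// without the sysctl it fails with "permission denied".
		ln, err := net.Listen("tcp", ":80")
		if err != nil {
			log.Fatalf("bind :80 failed: %v", err)
		}
		defer ln.Close()
		log.Println("listening on", ln.Addr())
	}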
	I1209 11:23:29.900320  645459 command_runner.go:130] > # List of devices on the host that a
	I1209 11:23:29.900334  645459 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1209 11:23:29.900344  645459 command_runner.go:130] > # allowed_devices = [
	I1209 11:23:29.900350  645459 command_runner.go:130] > # 	"/dev/fuse",
	I1209 11:23:29.900362  645459 command_runner.go:130] > # ]
	I1209 11:23:29.900375  645459 command_runner.go:130] > # List of additional devices, specified as
	I1209 11:23:29.900390  645459 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1209 11:23:29.900402  645459 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1209 11:23:29.900412  645459 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 11:23:29.900421  645459 command_runner.go:130] > # additional_devices = [
	I1209 11:23:29.900427  645459 command_runner.go:130] > # ]
	I1209 11:23:29.900438  645459 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1209 11:23:29.900455  645459 command_runner.go:130] > # cdi_spec_dirs = [
	I1209 11:23:29.900464  645459 command_runner.go:130] > # 	"/etc/cdi",
	I1209 11:23:29.900470  645459 command_runner.go:130] > # 	"/var/run/cdi",
	I1209 11:23:29.900478  645459 command_runner.go:130] > # ]
	I1209 11:23:29.900489  645459 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1209 11:23:29.900503  645459 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1209 11:23:29.900512  645459 command_runner.go:130] > # Defaults to false.
	I1209 11:23:29.900521  645459 command_runner.go:130] > # device_ownership_from_security_context = false
	I1209 11:23:29.900536  645459 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1209 11:23:29.900551  645459 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1209 11:23:29.900559  645459 command_runner.go:130] > # hooks_dir = [
	I1209 11:23:29.900567  645459 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1209 11:23:29.900575  645459 command_runner.go:130] > # ]
	I1209 11:23:29.900588  645459 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1209 11:23:29.900602  645459 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1209 11:23:29.900614  645459 command_runner.go:130] > # its default mounts from the following two files:
	I1209 11:23:29.900623  645459 command_runner.go:130] > #
	I1209 11:23:29.900632  645459 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1209 11:23:29.900646  645459 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1209 11:23:29.900658  645459 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1209 11:23:29.900667  645459 command_runner.go:130] > #
	I1209 11:23:29.900678  645459 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1209 11:23:29.900694  645459 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1209 11:23:29.900708  645459 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1209 11:23:29.900719  645459 command_runner.go:130] > #      only add mounts it finds in this file.
	I1209 11:23:29.900724  645459 command_runner.go:130] > #
	I1209 11:23:29.900733  645459 command_runner.go:130] > # default_mounts_file = ""
	I1209 11:23:29.900742  645459 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1209 11:23:29.900777  645459 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1209 11:23:29.900795  645459 command_runner.go:130] > pids_limit = 1024
	I1209 11:23:29.900805  645459 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1209 11:23:29.900819  645459 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1209 11:23:29.900832  645459 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1209 11:23:29.900847  645459 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1209 11:23:29.900856  645459 command_runner.go:130] > # log_size_max = -1
	I1209 11:23:29.900867  645459 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1209 11:23:29.900877  645459 command_runner.go:130] > # log_to_journald = false
	I1209 11:23:29.900888  645459 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1209 11:23:29.900905  645459 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1209 11:23:29.900920  645459 command_runner.go:130] > # Path to directory for container attach sockets.
	I1209 11:23:29.900935  645459 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1209 11:23:29.900946  645459 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1209 11:23:29.900954  645459 command_runner.go:130] > # bind_mount_prefix = ""
	I1209 11:23:29.900965  645459 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1209 11:23:29.900975  645459 command_runner.go:130] > # read_only = false
	I1209 11:23:29.900987  645459 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1209 11:23:29.901000  645459 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1209 11:23:29.901007  645459 command_runner.go:130] > # live configuration reload.
	I1209 11:23:29.901017  645459 command_runner.go:130] > # log_level = "info"
	I1209 11:23:29.901031  645459 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1209 11:23:29.901041  645459 command_runner.go:130] > # This option supports live configuration reload.
	I1209 11:23:29.901050  645459 command_runner.go:130] > # log_filter = ""
	I1209 11:23:29.901058  645459 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1209 11:23:29.901069  645459 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1209 11:23:29.901078  645459 command_runner.go:130] > # separated by comma.
	I1209 11:23:29.901087  645459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 11:23:29.901095  645459 command_runner.go:130] > # uid_mappings = ""
	I1209 11:23:29.901103  645459 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1209 11:23:29.901114  645459 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1209 11:23:29.901123  645459 command_runner.go:130] > # separated by comma.
	I1209 11:23:29.901133  645459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 11:23:29.901142  645459 command_runner.go:130] > # gid_mappings = ""
	I1209 11:23:29.901152  645459 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1209 11:23:29.901163  645459 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 11:23:29.901172  645459 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 11:23:29.901185  645459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 11:23:29.901195  645459 command_runner.go:130] > # minimum_mappable_uid = -1
	I1209 11:23:29.901206  645459 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1209 11:23:29.901225  645459 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 11:23:29.901239  645459 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 11:23:29.901250  645459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 11:23:29.901264  645459 command_runner.go:130] > # minimum_mappable_gid = -1
	I1209 11:23:29.901272  645459 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1209 11:23:29.901285  645459 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1209 11:23:29.901296  645459 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1209 11:23:29.901312  645459 command_runner.go:130] > # ctr_stop_timeout = 30
	I1209 11:23:29.901323  645459 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1209 11:23:29.901334  645459 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1209 11:23:29.901341  645459 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1209 11:23:29.901351  645459 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1209 11:23:29.901356  645459 command_runner.go:130] > drop_infra_ctr = false
	I1209 11:23:29.901366  645459 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1209 11:23:29.901378  645459 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1209 11:23:29.901390  645459 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1209 11:23:29.901398  645459 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1209 11:23:29.901407  645459 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1209 11:23:29.901418  645459 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1209 11:23:29.901427  645459 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1209 11:23:29.901436  645459 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1209 11:23:29.901442  645459 command_runner.go:130] > # shared_cpuset = ""
	I1209 11:23:29.901454  645459 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1209 11:23:29.901464  645459 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1209 11:23:29.901475  645459 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1209 11:23:29.901488  645459 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1209 11:23:29.901498  645459 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1209 11:23:29.901506  645459 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1209 11:23:29.901517  645459 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1209 11:23:29.901523  645459 command_runner.go:130] > # enable_criu_support = false
	I1209 11:23:29.901532  645459 command_runner.go:130] > # Enable/disable the generation of the container,
	I1209 11:23:29.901540  645459 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1209 11:23:29.901550  645459 command_runner.go:130] > # enable_pod_events = false
	I1209 11:23:29.901559  645459 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1209 11:23:29.901581  645459 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1209 11:23:29.901595  645459 command_runner.go:130] > # default_runtime = "runc"
	I1209 11:23:29.901606  645459 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1209 11:23:29.901615  645459 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I1209 11:23:29.901631  645459 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1209 11:23:29.901642  645459 command_runner.go:130] > # creation as a file is not desired either.
	I1209 11:23:29.901652  645459 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1209 11:23:29.901668  645459 command_runner.go:130] > # the hostname is being managed dynamically.
	I1209 11:23:29.901678  645459 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1209 11:23:29.901682  645459 command_runner.go:130] > # ]
	I1209 11:23:29.901692  645459 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1209 11:23:29.901703  645459 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1209 11:23:29.901716  645459 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1209 11:23:29.901727  645459 command_runner.go:130] > # Each entry in the table should follow the format:
	I1209 11:23:29.901736  645459 command_runner.go:130] > #
	I1209 11:23:29.901743  645459 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1209 11:23:29.901752  645459 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1209 11:23:29.901819  645459 command_runner.go:130] > # runtime_type = "oci"
	I1209 11:23:29.901838  645459 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1209 11:23:29.901851  645459 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1209 11:23:29.901858  645459 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1209 11:23:29.901869  645459 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1209 11:23:29.901875  645459 command_runner.go:130] > # monitor_env = []
	I1209 11:23:29.901883  645459 command_runner.go:130] > # privileged_without_host_devices = false
	I1209 11:23:29.901894  645459 command_runner.go:130] > # allowed_annotations = []
	I1209 11:23:29.901907  645459 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1209 11:23:29.901917  645459 command_runner.go:130] > # Where:
	I1209 11:23:29.901925  645459 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1209 11:23:29.901937  645459 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1209 11:23:29.901947  645459 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1209 11:23:29.901959  645459 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1209 11:23:29.901968  645459 command_runner.go:130] > #   in $PATH.
	I1209 11:23:29.901977  645459 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1209 11:23:29.901988  645459 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1209 11:23:29.901999  645459 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1209 11:23:29.902008  645459 command_runner.go:130] > #   state.
	I1209 11:23:29.902018  645459 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1209 11:23:29.902030  645459 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1209 11:23:29.902040  645459 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1209 11:23:29.902051  645459 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1209 11:23:29.902062  645459 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1209 11:23:29.902075  645459 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1209 11:23:29.902086  645459 command_runner.go:130] > #   The currently recognized values are:
	I1209 11:23:29.902095  645459 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1209 11:23:29.902109  645459 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1209 11:23:29.902126  645459 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1209 11:23:29.902136  645459 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1209 11:23:29.902150  645459 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1209 11:23:29.902164  645459 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1209 11:23:29.902195  645459 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1209 11:23:29.902208  645459 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1209 11:23:29.902227  645459 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1209 11:23:29.902240  645459 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1209 11:23:29.902247  645459 command_runner.go:130] > #   deprecated option "conmon".
	I1209 11:23:29.902264  645459 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1209 11:23:29.902276  645459 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1209 11:23:29.902286  645459 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1209 11:23:29.902297  645459 command_runner.go:130] > #   should be moved to the container's cgroup
	I1209 11:23:29.902307  645459 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1209 11:23:29.902319  645459 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1209 11:23:29.902331  645459 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1209 11:23:29.902345  645459 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1209 11:23:29.902353  645459 command_runner.go:130] > #
	I1209 11:23:29.902360  645459 command_runner.go:130] > # Using the seccomp notifier feature:
	I1209 11:23:29.902368  645459 command_runner.go:130] > #
	I1209 11:23:29.902377  645459 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1209 11:23:29.902388  645459 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1209 11:23:29.902398  645459 command_runner.go:130] > #
	I1209 11:23:29.902411  645459 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1209 11:23:29.902424  645459 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1209 11:23:29.902432  645459 command_runner.go:130] > #
	I1209 11:23:29.902443  645459 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1209 11:23:29.902451  645459 command_runner.go:130] > # feature.
	I1209 11:23:29.902457  645459 command_runner.go:130] > #
	I1209 11:23:29.902468  645459 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1209 11:23:29.902480  645459 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1209 11:23:29.902493  645459 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1209 11:23:29.902506  645459 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1209 11:23:29.902514  645459 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1209 11:23:29.902520  645459 command_runner.go:130] > #
	I1209 11:23:29.902529  645459 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1209 11:23:29.902547  645459 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1209 11:23:29.902553  645459 command_runner.go:130] > #
	I1209 11:23:29.902562  645459 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1209 11:23:29.902574  645459 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1209 11:23:29.902579  645459 command_runner.go:130] > #
	I1209 11:23:29.902588  645459 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1209 11:23:29.902596  645459 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1209 11:23:29.902605  645459 command_runner.go:130] > # limitation.
	I1209 11:23:29.902611  645459 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1209 11:23:29.902617  645459 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1209 11:23:29.902623  645459 command_runner.go:130] > runtime_type = "oci"
	I1209 11:23:29.902632  645459 command_runner.go:130] > runtime_root = "/run/runc"
	I1209 11:23:29.902638  645459 command_runner.go:130] > runtime_config_path = ""
	I1209 11:23:29.902648  645459 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1209 11:23:29.902655  645459 command_runner.go:130] > monitor_cgroup = "pod"
	I1209 11:23:29.902662  645459 command_runner.go:130] > monitor_exec_cgroup = ""
	I1209 11:23:29.902669  645459 command_runner.go:130] > monitor_env = [
	I1209 11:23:29.902681  645459 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1209 11:23:29.902686  645459 command_runner.go:130] > ]
	I1209 11:23:29.902695  645459 command_runner.go:130] > privileged_without_host_devices = false
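Editor's note: the [crio.runtime.runtimes.runc] table above follows the handler format documented earlier in the config. Below is a minimal sketch, not CRI-O or minikube code, of reading that fragment into Go types; it assumes the third-party github.com/BurntSushi/toml module as a dependency.

	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	// handler mirrors the per-runtime keys quoted in the log.
	type handler struct {
		RuntimePath   string `toml:"runtime_path"`
		RuntimeType   string `toml:"runtime_type"`
		RuntimeRoot   string `toml:"runtime_root"`
		MonitorPath   string `toml:"monitor_path"`
		MonitorCgroup string `toml:"monitor_cgroup"`
	}

	type config struct {
		Crio struct {
			Runtime struct {
				Runtimes map[string]handler `toml:"runtimes"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	// fragment is copied (trimmed) from the crio config output above.
	const fragment = `
	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	`

	func main() {
		var c config
		if _, err := toml.Decode(fragment, &c); err != nil {
			log.Fatal(err)
		}
		runc := c.Crio.Runtime.Runtimes["runc"]
		fmt.Printf("runc: path=%s type=%s root=%s\n", runc.RuntimePath, runc.RuntimeType, runc.RuntimeRoot)
	}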
	I1209 11:23:29.902707  645459 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1209 11:23:29.902716  645459 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1209 11:23:29.902726  645459 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1209 11:23:29.902739  645459 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1209 11:23:29.902750  645459 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1209 11:23:29.902761  645459 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1209 11:23:29.902778  645459 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1209 11:23:29.902793  645459 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1209 11:23:29.902805  645459 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1209 11:23:29.902814  645459 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1209 11:23:29.902823  645459 command_runner.go:130] > # Example:
	I1209 11:23:29.902828  645459 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1209 11:23:29.902835  645459 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1209 11:23:29.902842  645459 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1209 11:23:29.902849  645459 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1209 11:23:29.902855  645459 command_runner.go:130] > # cpuset = 0
	I1209 11:23:29.902861  645459 command_runner.go:130] > # cpushares = "0-1"
	I1209 11:23:29.902866  645459 command_runner.go:130] > # Where:
	I1209 11:23:29.902878  645459 command_runner.go:130] > # The workload name is workload-type.
	I1209 11:23:29.902887  645459 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1209 11:23:29.902893  645459 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1209 11:23:29.902900  645459 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1209 11:23:29.902912  645459 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1209 11:23:29.902920  645459 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1209 11:23:29.902927  645459 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1209 11:23:29.902936  645459 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1209 11:23:29.902943  645459 command_runner.go:130] > # Default value is set to true
	I1209 11:23:29.902950  645459 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1209 11:23:29.902959  645459 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1209 11:23:29.902967  645459 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1209 11:23:29.902974  645459 command_runner.go:130] > # Default value is set to 'false'
	I1209 11:23:29.902980  645459 command_runner.go:130] > # disable_hostport_mapping = false
	I1209 11:23:29.902994  645459 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1209 11:23:29.902998  645459 command_runner.go:130] > #
	I1209 11:23:29.903007  645459 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1209 11:23:29.903017  645459 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1209 11:23:29.903025  645459 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1209 11:23:29.903036  645459 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1209 11:23:29.903044  645459 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1209 11:23:29.903055  645459 command_runner.go:130] > [crio.image]
	I1209 11:23:29.903066  645459 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1209 11:23:29.903076  645459 command_runner.go:130] > # default_transport = "docker://"
	I1209 11:23:29.903089  645459 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1209 11:23:29.903101  645459 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1209 11:23:29.903111  645459 command_runner.go:130] > # global_auth_file = ""
	I1209 11:23:29.903120  645459 command_runner.go:130] > # The image used to instantiate infra containers.
	I1209 11:23:29.903130  645459 command_runner.go:130] > # This option supports live configuration reload.
	I1209 11:23:29.903136  645459 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1209 11:23:29.903149  645459 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1209 11:23:29.903160  645459 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1209 11:23:29.903169  645459 command_runner.go:130] > # This option supports live configuration reload.
	I1209 11:23:29.903178  645459 command_runner.go:130] > # pause_image_auth_file = ""
	I1209 11:23:29.903187  645459 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1209 11:23:29.903199  645459 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1209 11:23:29.903228  645459 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1209 11:23:29.903244  645459 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1209 11:23:29.903255  645459 command_runner.go:130] > # pause_command = "/pause"
	I1209 11:23:29.903270  645459 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1209 11:23:29.903283  645459 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1209 11:23:29.903296  645459 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1209 11:23:29.903308  645459 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1209 11:23:29.903319  645459 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1209 11:23:29.903332  645459 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1209 11:23:29.903339  645459 command_runner.go:130] > # pinned_images = [
	I1209 11:23:29.903347  645459 command_runner.go:130] > # ]
	I1209 11:23:29.903358  645459 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1209 11:23:29.903371  645459 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1209 11:23:29.903382  645459 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1209 11:23:29.903394  645459 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1209 11:23:29.903405  645459 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1209 11:23:29.903414  645459 command_runner.go:130] > # signature_policy = ""
	I1209 11:23:29.903424  645459 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1209 11:23:29.903437  645459 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1209 11:23:29.903450  645459 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1209 11:23:29.903459  645459 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1209 11:23:29.903471  645459 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1209 11:23:29.903482  645459 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1209 11:23:29.903494  645459 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1209 11:23:29.903506  645459 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1209 11:23:29.903515  645459 command_runner.go:130] > # changing them here.
	I1209 11:23:29.903522  645459 command_runner.go:130] > # insecure_registries = [
	I1209 11:23:29.903531  645459 command_runner.go:130] > # ]
	I1209 11:23:29.903544  645459 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1209 11:23:29.903556  645459 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1209 11:23:29.903562  645459 command_runner.go:130] > # image_volumes = "mkdir"
	I1209 11:23:29.903572  645459 command_runner.go:130] > # Temporary directory to use for storing big files
	I1209 11:23:29.903583  645459 command_runner.go:130] > # big_files_temporary_dir = ""
	I1209 11:23:29.903593  645459 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1209 11:23:29.903602  645459 command_runner.go:130] > # CNI plugins.
	I1209 11:23:29.903608  645459 command_runner.go:130] > [crio.network]
	I1209 11:23:29.903620  645459 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1209 11:23:29.903636  645459 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1209 11:23:29.903646  645459 command_runner.go:130] > # cni_default_network = ""
	I1209 11:23:29.903658  645459 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1209 11:23:29.903670  645459 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1209 11:23:29.903683  645459 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1209 11:23:29.903692  645459 command_runner.go:130] > # plugin_dirs = [
	I1209 11:23:29.903698  645459 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1209 11:23:29.903712  645459 command_runner.go:130] > # ]
	I1209 11:23:29.903724  645459 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1209 11:23:29.903735  645459 command_runner.go:130] > [crio.metrics]
	I1209 11:23:29.903745  645459 command_runner.go:130] > # Globally enable or disable metrics support.
	I1209 11:23:29.903754  645459 command_runner.go:130] > enable_metrics = true
	I1209 11:23:29.903763  645459 command_runner.go:130] > # Specify enabled metrics collectors.
	I1209 11:23:29.903773  645459 command_runner.go:130] > # Per default all metrics are enabled.
	I1209 11:23:29.903785  645459 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1209 11:23:29.903798  645459 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1209 11:23:29.903810  645459 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1209 11:23:29.903820  645459 command_runner.go:130] > # metrics_collectors = [
	I1209 11:23:29.903829  645459 command_runner.go:130] > # 	"operations",
	I1209 11:23:29.903839  645459 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1209 11:23:29.903848  645459 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1209 11:23:29.903856  645459 command_runner.go:130] > # 	"operations_errors",
	I1209 11:23:29.903865  645459 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1209 11:23:29.903871  645459 command_runner.go:130] > # 	"image_pulls_by_name",
	I1209 11:23:29.903881  645459 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1209 11:23:29.903888  645459 command_runner.go:130] > # 	"image_pulls_failures",
	I1209 11:23:29.903897  645459 command_runner.go:130] > # 	"image_pulls_successes",
	I1209 11:23:29.903908  645459 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1209 11:23:29.903916  645459 command_runner.go:130] > # 	"image_layer_reuse",
	I1209 11:23:29.903925  645459 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1209 11:23:29.903935  645459 command_runner.go:130] > # 	"containers_oom_total",
	I1209 11:23:29.903944  645459 command_runner.go:130] > # 	"containers_oom",
	I1209 11:23:29.903953  645459 command_runner.go:130] > # 	"processes_defunct",
	I1209 11:23:29.903960  645459 command_runner.go:130] > # 	"operations_total",
	I1209 11:23:29.903970  645459 command_runner.go:130] > # 	"operations_latency_seconds",
	I1209 11:23:29.903978  645459 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1209 11:23:29.903989  645459 command_runner.go:130] > # 	"operations_errors_total",
	I1209 11:23:29.904000  645459 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1209 11:23:29.904011  645459 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1209 11:23:29.904021  645459 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1209 11:23:29.904034  645459 command_runner.go:130] > # 	"image_pulls_success_total",
	I1209 11:23:29.904046  645459 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1209 11:23:29.904053  645459 command_runner.go:130] > # 	"containers_oom_count_total",
	I1209 11:23:29.904064  645459 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1209 11:23:29.904074  645459 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1209 11:23:29.904090  645459 command_runner.go:130] > # ]
	I1209 11:23:29.904102  645459 command_runner.go:130] > # The port on which the metrics server will listen.
	I1209 11:23:29.904113  645459 command_runner.go:130] > # metrics_port = 9090
	I1209 11:23:29.904125  645459 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1209 11:23:29.904131  645459 command_runner.go:130] > # metrics_socket = ""
	I1209 11:23:29.904142  645459 command_runner.go:130] > # The certificate for the secure metrics server.
	I1209 11:23:29.904152  645459 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1209 11:23:29.904165  645459 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1209 11:23:29.904175  645459 command_runner.go:130] > # certificate on any modification event.
	I1209 11:23:29.904184  645459 command_runner.go:130] > # metrics_cert = ""
	I1209 11:23:29.904192  645459 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1209 11:23:29.904202  645459 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1209 11:23:29.904211  645459 command_runner.go:130] > # metrics_key = ""
	I1209 11:23:29.904227  645459 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1209 11:23:29.904237  645459 command_runner.go:130] > [crio.tracing]
	I1209 11:23:29.904249  645459 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1209 11:23:29.904258  645459 command_runner.go:130] > # enable_tracing = false
	I1209 11:23:29.904269  645459 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1209 11:23:29.904280  645459 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1209 11:23:29.904293  645459 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1209 11:23:29.904305  645459 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1209 11:23:29.904312  645459 command_runner.go:130] > # CRI-O NRI configuration.
	I1209 11:23:29.904322  645459 command_runner.go:130] > [crio.nri]
	I1209 11:23:29.904332  645459 command_runner.go:130] > # Globally enable or disable NRI.
	I1209 11:23:29.904341  645459 command_runner.go:130] > # enable_nri = false
	I1209 11:23:29.904351  645459 command_runner.go:130] > # NRI socket to listen on.
	I1209 11:23:29.904361  645459 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1209 11:23:29.904371  645459 command_runner.go:130] > # NRI plugin directory to use.
	I1209 11:23:29.904380  645459 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1209 11:23:29.904387  645459 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1209 11:23:29.904397  645459 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1209 11:23:29.904404  645459 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1209 11:23:29.904414  645459 command_runner.go:130] > # nri_disable_connections = false
	I1209 11:23:29.904422  645459 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1209 11:23:29.904432  645459 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1209 11:23:29.904440  645459 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1209 11:23:29.904450  645459 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1209 11:23:29.904458  645459 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1209 11:23:29.904463  645459 command_runner.go:130] > [crio.stats]
	I1209 11:23:29.904471  645459 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1209 11:23:29.904479  645459 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1209 11:23:29.904486  645459 command_runner.go:130] > # stats_collection_period = 0
	I1209 11:23:29.904523  645459 command_runner.go:130] ! time="2024-12-09 11:23:29.866966552Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1209 11:23:29.904551  645459 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
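
The dump above is the tail of the CRI-O configuration minikube renders on the node: the [crio.metrics], [crio.tracing], [crio.nri] and [crio.stats] sections are shown with their defaults commented out, and only enable_metrics = true is actually set. A minimal, illustrative Go sketch for pulling out just the uncommented settings from such a TOML-style file is below; the path /etc/crio/crio.conf is an assumption about where the rendered config lives on the node, not a value taken from this log.

    // listactive.go: print the uncommented settings of a TOML-style CRI-O
    // config so active overrides (e.g. enable_metrics = true) stand out from
    // the commented-out defaults dumped above. Illustrative only.
    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/crio/crio.conf") // assumed location on the node
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        section := ""
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case line == "" || strings.HasPrefix(line, "#"):
                // skip blanks and commented-out defaults
            case strings.HasPrefix(line, "["):
                section = line // e.g. [crio.metrics], [crio.tracing], [crio.nri]
            case strings.Contains(line, "="):
                fmt.Printf("%s %s\n", section, line)
            }
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }
    }
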
	I1209 11:23:29.904638  645459 cni.go:84] Creating CNI manager for ""
	I1209 11:23:29.904652  645459 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1209 11:23:29.904668  645459 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:23:29.904704  645459 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.31 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-714725 NodeName:multinode-714725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:23:29.904829  645459 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-714725"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.31"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.31"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
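
The generated kubeadm/kubelet/kube-proxy config above pins podSubnet to 10.244.0.0/16 and serviceSubnet to 10.96.0.0/12. A quick sanity check worth knowing is that these two ranges must not overlap; the stdlib-only Go sketch below performs that check using the values copied from the config. It is an illustrative check, not part of minikube.

    // cidrcheck.go: verify that the pod and service CIDRs from the generated
    // kubeadm config above are valid prefixes and do not overlap.
    package main

    import (
        "fmt"
        "log"
        "net/netip"
    )

    func main() {
        pod, err := netip.ParsePrefix("10.244.0.0/16") // networking.podSubnet
        if err != nil {
            log.Fatal(err)
        }
        svc, err := netip.ParsePrefix("10.96.0.0/12") // networking.serviceSubnet
        if err != nil {
            log.Fatal(err)
        }
        if pod.Overlaps(svc) {
            log.Fatalf("pod subnet %v overlaps service subnet %v", pod, svc)
        }
        fmt.Printf("ok: %v and %v are disjoint\n", pod, svc)
    }
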
	
	I1209 11:23:29.904903  645459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:23:29.920249  645459 command_runner.go:130] > kubeadm
	I1209 11:23:29.920280  645459 command_runner.go:130] > kubectl
	I1209 11:23:29.920286  645459 command_runner.go:130] > kubelet
	I1209 11:23:29.920309  645459 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:23:29.920383  645459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:23:29.929816  645459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1209 11:23:29.945595  645459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:23:29.961817  645459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1209 11:23:29.977580  645459 ssh_runner.go:195] Run: grep 192.168.39.31	control-plane.minikube.internal$ /etc/hosts
	I1209 11:23:29.981205  645459 command_runner.go:130] > 192.168.39.31	control-plane.minikube.internal
	I1209 11:23:29.981420  645459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:23:30.115771  645459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:23:30.130676  645459 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725 for IP: 192.168.39.31
	I1209 11:23:30.130704  645459 certs.go:194] generating shared ca certs ...
	I1209 11:23:30.130729  645459 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:23:30.130913  645459 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:23:30.130975  645459 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:23:30.130994  645459 certs.go:256] generating profile certs ...
	I1209 11:23:30.131122  645459 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/client.key
	I1209 11:23:30.131207  645459 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/apiserver.key.fa0b84c4
	I1209 11:23:30.131282  645459 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/proxy-client.key
	I1209 11:23:30.131302  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 11:23:30.131326  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 11:23:30.131346  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 11:23:30.131363  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 11:23:30.131389  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 11:23:30.131405  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 11:23:30.131423  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 11:23:30.131446  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 11:23:30.131507  645459 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:23:30.131551  645459 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:23:30.131561  645459 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:23:30.131591  645459 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:23:30.131628  645459 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:23:30.131661  645459 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:23:30.131750  645459 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:23:30.131859  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:23:30.131884  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem -> /usr/share/ca-certificates/617017.pem
	I1209 11:23:30.131904  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /usr/share/ca-certificates/6170172.pem
	I1209 11:23:30.132781  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:23:30.155181  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:23:30.177648  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:23:30.199088  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:23:30.220820  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 11:23:30.242445  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:23:30.264568  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:23:30.285723  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:23:30.307697  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:23:30.330439  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:23:30.351888  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:23:30.373689  645459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:23:30.388667  645459 ssh_runner.go:195] Run: openssl version
	I1209 11:23:30.394123  645459 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1209 11:23:30.394351  645459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:23:30.404186  645459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:23:30.408279  645459 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:23:30.408420  645459 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:23:30.408474  645459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:23:30.413751  645459 command_runner.go:130] > b5213941
	I1209 11:23:30.413815  645459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:23:30.422726  645459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:23:30.432470  645459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:23:30.436551  645459 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:23:30.436622  645459 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:23:30.436668  645459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:23:30.441877  645459 command_runner.go:130] > 51391683
	I1209 11:23:30.441949  645459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:23:30.450577  645459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:23:30.460334  645459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:23:30.464283  645459 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:23:30.464416  645459 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:23:30.464466  645459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:23:30.469648  645459 command_runner.go:130] > 3ec20f2e
	I1209 11:23:30.469721  645459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
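
The three command sequences above install each CA into the node's trust store: the certificate is linked into /etc/ssl/certs under its own name, its OpenSSL subject hash is computed with `openssl x509 -hash -noout`, and a <hash>.0 symlink is added so hashed lookups resolve. The Go sketch below mirrors one such step by shelling out to openssl; it assumes openssl and root on the target host and is illustrative, not minikube's actual helper.

    // catrust.go: mirror the CA-installation step from the log: link a copied
    // certificate into /etc/ssl/certs and add the OpenSSL subject-hash symlink.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func trustCA(name string) error {
        src := "/usr/share/ca-certificates/" + name // already copied by scp above
        pemLink := "/etc/ssl/certs/" + name
        if err := os.Symlink(src, pemLink); err != nil && !os.IsExist(err) {
            return err
        }
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", src).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        hashLink := "/etc/ssl/certs/" + hash + ".0"
        if err := os.Symlink(pemLink, hashLink); err != nil && !os.IsExist(err) {
            return err
        }
        return nil
    }

    func main() {
        if err := trustCA("minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("installed minikubeCA.pem into the trust store")
    }
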
	I1209 11:23:30.478061  645459 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:23:30.482176  645459 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:23:30.482197  645459 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1209 11:23:30.482205  645459 command_runner.go:130] > Device: 253,1	Inode: 4197422     Links: 1
	I1209 11:23:30.482217  645459 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 11:23:30.482230  645459 command_runner.go:130] > Access: 2024-12-09 11:16:45.210739788 +0000
	I1209 11:23:30.482238  645459 command_runner.go:130] > Modify: 2024-12-09 11:16:45.210739788 +0000
	I1209 11:23:30.482250  645459 command_runner.go:130] > Change: 2024-12-09 11:16:45.210739788 +0000
	I1209 11:23:30.482262  645459 command_runner.go:130] >  Birth: 2024-12-09 11:16:45.210739788 +0000
	I1209 11:23:30.482304  645459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:23:30.487483  645459 command_runner.go:130] > Certificate will not expire
	I1209 11:23:30.487547  645459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:23:30.492442  645459 command_runner.go:130] > Certificate will not expire
	I1209 11:23:30.492668  645459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:23:30.497592  645459 command_runner.go:130] > Certificate will not expire
	I1209 11:23:30.497758  645459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:23:30.502759  645459 command_runner.go:130] > Certificate will not expire
	I1209 11:23:30.502809  645459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:23:30.507736  645459 command_runner.go:130] > Certificate will not expire
	I1209 11:23:30.507889  645459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 11:23:30.512797  645459 command_runner.go:130] > Certificate will not expire
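
The five `openssl x509 -noout -checkend 86400` runs above confirm that none of the control-plane certificates expires within the next 24 hours. The same check in pure Go, using crypto/x509, looks like the sketch below; the path is one of the certificates checked in the log.

    // checkend.go: Go equivalent of `openssl x509 -noout -checkend 86400 -in cert`:
    // report whether a PEM certificate expires within the next 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }
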
	I1209 11:23:30.512866  645459 kubeadm.go:392] StartCluster: {Name:multinode-714725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-714725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.208 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:23:30.512998  645459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:23:30.513056  645459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:23:30.545698  645459 command_runner.go:130] > 000acae6b217f427b8f6acdf002e363e07c656690f10420767c4cc5a8eb5a9fb
	I1209 11:23:30.545736  645459 command_runner.go:130] > b290f046ccdb2cf03080d6ac2d459063f48e75106d6a3af08a6a2851744af474
	I1209 11:23:30.545746  645459 command_runner.go:130] > e26cbe010ad4442ceadffab51ef56d87b6f192b91651925c56194481053fa335
	I1209 11:23:30.545757  645459 command_runner.go:130] > f858cae62f854847342f432947742bac7fc1329cb1e1886fcddd5888a674d561
	I1209 11:23:30.545766  645459 command_runner.go:130] > 13dde430803b2ead2165363121d70eb4fedc39d2a7f6ea59aa7ed6fbbe2c4e8e
	I1209 11:23:30.545776  645459 command_runner.go:130] > 490cfe762cf3942a733ef67734bdad81051e0355b76be9b5df0ddc2872cbaf31
	I1209 11:23:30.545797  645459 command_runner.go:130] > b5459c77ec8bed068a441985a42c9997504af0e6beb5fe241f32d120a7df3940
	I1209 11:23:30.545819  645459 command_runner.go:130] > 027004a5bcaf40ecd3ca7d0b0f75eef805cabd118273733e4c134fa161d932fd
	I1209 11:23:30.547276  645459 cri.go:89] found id: "000acae6b217f427b8f6acdf002e363e07c656690f10420767c4cc5a8eb5a9fb"
	I1209 11:23:30.547294  645459 cri.go:89] found id: "b290f046ccdb2cf03080d6ac2d459063f48e75106d6a3af08a6a2851744af474"
	I1209 11:23:30.547299  645459 cri.go:89] found id: "e26cbe010ad4442ceadffab51ef56d87b6f192b91651925c56194481053fa335"
	I1209 11:23:30.547302  645459 cri.go:89] found id: "f858cae62f854847342f432947742bac7fc1329cb1e1886fcddd5888a674d561"
	I1209 11:23:30.547305  645459 cri.go:89] found id: "13dde430803b2ead2165363121d70eb4fedc39d2a7f6ea59aa7ed6fbbe2c4e8e"
	I1209 11:23:30.547309  645459 cri.go:89] found id: "490cfe762cf3942a733ef67734bdad81051e0355b76be9b5df0ddc2872cbaf31"
	I1209 11:23:30.547311  645459 cri.go:89] found id: "b5459c77ec8bed068a441985a42c9997504af0e6beb5fe241f32d120a7df3940"
	I1209 11:23:30.547314  645459 cri.go:89] found id: "027004a5bcaf40ecd3ca7d0b0f75eef805cabd118273733e4c134fa161d932fd"
	I1209 11:23:30.547317  645459 cri.go:89] found id: ""
	I1209 11:23:30.547366  645459 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
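
The tail of the log above shows minikube enumerating the existing kube-system containers through crictl (filtered by the io.kubernetes.pod.namespace label) before it restarts the cluster. The Go sketch below runs the same listing; it assumes crictl and root access on the minikube node and is an illustrative wrapper, not minikube code.

    // crilist.go: list all kube-system container IDs via crictl, matching the
    // enumeration step at the end of the log above.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            log.Fatal(err)
        }
        ids := strings.Fields(string(out)) // one container ID per line
        for _, id := range ids {
            fmt.Println(id)
        }
        fmt.Printf("found %d kube-system containers\n", len(ids))
    }
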
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-714725 -n multinode-714725
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-714725 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (325.05s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (145.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 stop
E1209 11:26:25.726027  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:26:33.303222  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-714725 stop: exit status 82 (2m0.472265666s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-714725-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-714725 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-714725 status: (18.827581935s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-714725 status --alsologtostderr: (3.391960511s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-714725 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-714725 status --alsologtostderr": 
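
The two assertions above inspect the `minikube status --alsologtostderr` output and expect every remaining node to report its host and kubelet as stopped; because `minikube stop` timed out with exit status 82 (GUEST_STOP_TIMEOUT), at least one node was still Running. A rough sketch of that kind of post-stop check is below. It approximates the idea of the assertion by counting "Stopped" markers in the status text; the exact strings and the node count are assumptions for illustration, and the real check lives in multinode_test.go.

    // stoppedcount.go: approximate the post-stop assertion by counting how many
    // hosts and kubelets report "Stopped" in `minikube status` output.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        const wantNodes = 2 // multinode-714725 and -m02 remain after m03 was deleted

        // status exits non-zero when hosts are stopped, so inspect the text either way.
        out, _ := exec.Command("out/minikube-linux-amd64", "-p", "multinode-714725",
            "status", "--alsologtostderr").CombinedOutput()

        stoppedHosts := strings.Count(string(out), "host: Stopped")       // assumed status format
        stoppedKubelets := strings.Count(string(out), "kubelet: Stopped") // assumed status format
        if stoppedHosts != wantNodes || stoppedKubelets != wantNodes {
            log.Fatalf("want %d stopped hosts/kubelets, got hosts=%d kubelets=%d",
                wantNodes, stoppedHosts, stoppedKubelets)
        }
        fmt.Println("all nodes stopped")
    }
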
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-714725 -n multinode-714725
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-714725 logs -n 25: (2.031102655s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-714725 ssh -n                                                                 | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-714725 cp multinode-714725-m02:/home/docker/cp-test.txt                       | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725:/home/docker/cp-test_multinode-714725-m02_multinode-714725.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n                                                                 | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n multinode-714725 sudo cat                                       | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | /home/docker/cp-test_multinode-714725-m02_multinode-714725.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-714725 cp multinode-714725-m02:/home/docker/cp-test.txt                       | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m03:/home/docker/cp-test_multinode-714725-m02_multinode-714725-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n                                                                 | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n multinode-714725-m03 sudo cat                                   | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | /home/docker/cp-test_multinode-714725-m02_multinode-714725-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-714725 cp testdata/cp-test.txt                                                | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n                                                                 | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-714725 cp multinode-714725-m03:/home/docker/cp-test.txt                       | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2432959614/001/cp-test_multinode-714725-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n                                                                 | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-714725 cp multinode-714725-m03:/home/docker/cp-test.txt                       | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725:/home/docker/cp-test_multinode-714725-m03_multinode-714725.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n                                                                 | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n multinode-714725 sudo cat                                       | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | /home/docker/cp-test_multinode-714725-m03_multinode-714725.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-714725 cp multinode-714725-m03:/home/docker/cp-test.txt                       | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m02:/home/docker/cp-test_multinode-714725-m03_multinode-714725-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n                                                                 | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n multinode-714725-m02 sudo cat                                   | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | /home/docker/cp-test_multinode-714725-m03_multinode-714725-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-714725 node stop m03                                                          | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	| node    | multinode-714725 node start                                                             | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-714725                                                                | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC |                     |
	| stop    | -p multinode-714725                                                                     | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC |                     |
	| start   | -p multinode-714725                                                                     | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:21 UTC | 09 Dec 24 11:25 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-714725                                                                | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:25 UTC |                     |
	| node    | multinode-714725 node delete                                                            | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:25 UTC | 09 Dec 24 11:25 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-714725 stop                                                                   | multinode-714725 | jenkins | v1.34.0 | 09 Dec 24 11:25 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 11:21:56
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 11:21:56.506451  645459 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:21:56.506590  645459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:21:56.506602  645459 out.go:358] Setting ErrFile to fd 2...
	I1209 11:21:56.506606  645459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:21:56.506777  645459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:21:56.507375  645459 out.go:352] Setting JSON to false
	I1209 11:21:56.508430  645459 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14660,"bootTime":1733728656,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 11:21:56.508500  645459 start.go:139] virtualization: kvm guest
	I1209 11:21:56.510860  645459 out.go:177] * [multinode-714725] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 11:21:56.512096  645459 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:21:56.512096  645459 notify.go:220] Checking for updates...
	I1209 11:21:56.514114  645459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:21:56.515318  645459 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:21:56.516314  645459 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:21:56.517463  645459 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 11:21:56.518481  645459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:21:56.520209  645459 config.go:182] Loaded profile config "multinode-714725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:21:56.520366  645459 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:21:56.521236  645459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:21:56.521291  645459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:21:56.537711  645459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37237
	I1209 11:21:56.538201  645459 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:21:56.538908  645459 main.go:141] libmachine: Using API Version  1
	I1209 11:21:56.538936  645459 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:21:56.539311  645459 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:21:56.539522  645459 main.go:141] libmachine: (multinode-714725) Calling .DriverName
	I1209 11:21:56.575486  645459 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 11:21:56.577742  645459 start.go:297] selected driver: kvm2
	I1209 11:21:56.577850  645459 start.go:901] validating driver "kvm2" against &{Name:multinode-714725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-714725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.208 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:f
alse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:21:56.578414  645459 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:21:56.578766  645459 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:21:56.578870  645459 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 11:21:56.595157  645459 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 11:21:56.595880  645459 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:21:56.595918  645459 cni.go:84] Creating CNI manager for ""
	I1209 11:21:56.595970  645459 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1209 11:21:56.596028  645459 start.go:340] cluster config:
	{Name:multinode-714725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-714725 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.208 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisione
r:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:21:56.596174  645459 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:21:56.598134  645459 out.go:177] * Starting "multinode-714725" primary control-plane node in "multinode-714725" cluster
	I1209 11:21:56.599150  645459 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:21:56.599192  645459 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 11:21:56.599200  645459 cache.go:56] Caching tarball of preloaded images
	I1209 11:21:56.599297  645459 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 11:21:56.599311  645459 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 11:21:56.599479  645459 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/config.json ...
	I1209 11:21:56.599683  645459 start.go:360] acquireMachinesLock for multinode-714725: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:21:56.599732  645459 start.go:364] duration metric: took 28.976µs to acquireMachinesLock for "multinode-714725"
	I1209 11:21:56.599752  645459 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:21:56.599763  645459 fix.go:54] fixHost starting: 
	I1209 11:21:56.600015  645459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:21:56.600055  645459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:21:56.615104  645459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37161
	I1209 11:21:56.615603  645459 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:21:56.616267  645459 main.go:141] libmachine: Using API Version  1
	I1209 11:21:56.616293  645459 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:21:56.616623  645459 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:21:56.616789  645459 main.go:141] libmachine: (multinode-714725) Calling .DriverName
	I1209 11:21:56.616915  645459 main.go:141] libmachine: (multinode-714725) Calling .GetState
	I1209 11:21:56.618592  645459 fix.go:112] recreateIfNeeded on multinode-714725: state=Running err=<nil>
	W1209 11:21:56.618617  645459 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:21:56.620335  645459 out.go:177] * Updating the running kvm2 "multinode-714725" VM ...
	I1209 11:21:56.621598  645459 machine.go:93] provisionDockerMachine start ...
	I1209 11:21:56.621621  645459 main.go:141] libmachine: (multinode-714725) Calling .DriverName
	I1209 11:21:56.621820  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:21:56.624458  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.624962  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:21:56.625007  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.625132  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:21:56.625312  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:56.625459  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:56.625598  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:21:56.625769  645459 main.go:141] libmachine: Using SSH client type: native
	I1209 11:21:56.625953  645459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1209 11:21:56.625969  645459 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:21:56.736882  645459 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-714725
	
	I1209 11:21:56.736918  645459 main.go:141] libmachine: (multinode-714725) Calling .GetMachineName
	I1209 11:21:56.737216  645459 buildroot.go:166] provisioning hostname "multinode-714725"
	I1209 11:21:56.737244  645459 main.go:141] libmachine: (multinode-714725) Calling .GetMachineName
	I1209 11:21:56.737465  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:21:56.740032  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.740404  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:21:56.740447  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.740605  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:21:56.740784  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:56.740924  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:56.741022  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:21:56.741167  645459 main.go:141] libmachine: Using SSH client type: native
	I1209 11:21:56.741339  645459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1209 11:21:56.741351  645459 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-714725 && echo "multinode-714725" | sudo tee /etc/hostname
	I1209 11:21:56.861170  645459 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-714725
	
	I1209 11:21:56.861222  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:21:56.863897  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.864346  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:21:56.864380  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.864488  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:21:56.864694  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:56.864860  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:56.865024  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:21:56.865245  645459 main.go:141] libmachine: Using SSH client type: native
	I1209 11:21:56.865482  645459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1209 11:21:56.865500  645459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-714725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-714725/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-714725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:21:56.962727  645459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:21:56.962765  645459 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:21:56.962787  645459 buildroot.go:174] setting up certificates
	I1209 11:21:56.962795  645459 provision.go:84] configureAuth start
	I1209 11:21:56.962803  645459 main.go:141] libmachine: (multinode-714725) Calling .GetMachineName
	I1209 11:21:56.963097  645459 main.go:141] libmachine: (multinode-714725) Calling .GetIP
	I1209 11:21:56.965875  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.966267  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:21:56.966296  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.966447  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:21:56.968885  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.969264  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:21:56.969304  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:56.969485  645459 provision.go:143] copyHostCerts
	I1209 11:21:56.969523  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:21:56.969558  645459 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:21:56.969567  645459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:21:56.969630  645459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:21:56.969748  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:21:56.969772  645459 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:21:56.969781  645459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:21:56.969813  645459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:21:56.969869  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:21:56.969886  645459 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:21:56.969892  645459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:21:56.969913  645459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:21:56.969960  645459 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.multinode-714725 san=[127.0.0.1 192.168.39.31 localhost minikube multinode-714725]
	I1209 11:21:57.036368  645459 provision.go:177] copyRemoteCerts
	I1209 11:21:57.036438  645459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:21:57.036521  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:21:57.039147  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:57.039465  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:21:57.039502  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:57.039607  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:21:57.039789  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:57.039914  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:21:57.040047  645459 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/multinode-714725/id_rsa Username:docker}
	I1209 11:21:57.121698  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 11:21:57.121784  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1209 11:21:57.148922  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 11:21:57.149004  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 11:21:57.174235  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 11:21:57.174312  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:21:57.202534  645459 provision.go:87] duration metric: took 239.722079ms to configureAuth
	I1209 11:21:57.202569  645459 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:21:57.202834  645459 config.go:182] Loaded profile config "multinode-714725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:21:57.202949  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:21:57.205753  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:57.206203  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:21:57.206245  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:21:57.206467  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:21:57.206672  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:57.206939  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:21:57.207076  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:21:57.207287  645459 main.go:141] libmachine: Using SSH client type: native
	I1209 11:21:57.207469  645459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1209 11:21:57.207485  645459 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:23:27.931364  645459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:23:27.931424  645459 machine.go:96] duration metric: took 1m31.309808368s to provisionDockerMachine
	I1209 11:23:27.931444  645459 start.go:293] postStartSetup for "multinode-714725" (driver="kvm2")
	I1209 11:23:27.931455  645459 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:23:27.931492  645459 main.go:141] libmachine: (multinode-714725) Calling .DriverName
	I1209 11:23:27.931834  645459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:23:27.931875  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:23:27.935355  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:27.935796  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:23:27.935832  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:27.935980  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:23:27.936191  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:23:27.936385  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:23:27.936545  645459 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/multinode-714725/id_rsa Username:docker}
	I1209 11:23:28.018689  645459 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:23:28.022951  645459 command_runner.go:130] > NAME=Buildroot
	I1209 11:23:28.022976  645459 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1209 11:23:28.022981  645459 command_runner.go:130] > ID=buildroot
	I1209 11:23:28.022986  645459 command_runner.go:130] > VERSION_ID=2023.02.9
	I1209 11:23:28.022992  645459 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1209 11:23:28.023026  645459 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:23:28.023043  645459 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:23:28.023116  645459 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:23:28.023188  645459 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:23:28.023198  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /etc/ssl/certs/6170172.pem
	I1209 11:23:28.023284  645459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:23:28.032913  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:23:28.055736  645459 start.go:296] duration metric: took 124.276162ms for postStartSetup
	I1209 11:23:28.055813  645459 fix.go:56] duration metric: took 1m31.456048715s for fixHost
	I1209 11:23:28.055846  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:23:28.058820  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:28.059195  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:23:28.059227  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:28.059471  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:23:28.059704  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:23:28.059845  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:23:28.060037  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:23:28.060205  645459 main.go:141] libmachine: Using SSH client type: native
	I1209 11:23:28.060387  645459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1209 11:23:28.060399  645459 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:23:28.159101  645459 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733743408.136165136
	
	I1209 11:23:28.159129  645459 fix.go:216] guest clock: 1733743408.136165136
	I1209 11:23:28.159139  645459 fix.go:229] Guest: 2024-12-09 11:23:28.136165136 +0000 UTC Remote: 2024-12-09 11:23:28.055820906 +0000 UTC m=+91.591790282 (delta=80.34423ms)
	I1209 11:23:28.159170  645459 fix.go:200] guest clock delta is within tolerance: 80.34423ms
	I1209 11:23:28.159177  645459 start.go:83] releasing machines lock for "multinode-714725", held for 1m31.559433598s
	I1209 11:23:28.159202  645459 main.go:141] libmachine: (multinode-714725) Calling .DriverName
	I1209 11:23:28.159477  645459 main.go:141] libmachine: (multinode-714725) Calling .GetIP
	I1209 11:23:28.162352  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:28.162699  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:23:28.162732  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:28.162908  645459 main.go:141] libmachine: (multinode-714725) Calling .DriverName
	I1209 11:23:28.163420  645459 main.go:141] libmachine: (multinode-714725) Calling .DriverName
	I1209 11:23:28.163600  645459 main.go:141] libmachine: (multinode-714725) Calling .DriverName
	I1209 11:23:28.163684  645459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:23:28.163745  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:23:28.163809  645459 ssh_runner.go:195] Run: cat /version.json
	I1209 11:23:28.163832  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:23:28.166460  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:28.166602  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:28.166891  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:23:28.166923  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:28.166947  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:23:28.166970  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:28.167080  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:23:28.167224  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:23:28.167297  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:23:28.167382  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:23:28.167461  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:23:28.167561  645459 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:23:28.167573  645459 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/multinode-714725/id_rsa Username:docker}
	I1209 11:23:28.167668  645459 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/multinode-714725/id_rsa Username:docker}
	I1209 11:23:28.238540  645459 command_runner.go:130] > {"iso_version": "v1.34.0-1730913550-19917", "kicbase_version": "v0.0.45-1730888964-19917", "minikube_version": "v1.34.0", "commit": "72f43dde5d92c8ae490d0727dad53fb3ed6aa41e"}
	I1209 11:23:28.238849  645459 ssh_runner.go:195] Run: systemctl --version
	I1209 11:23:28.272471  645459 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1209 11:23:28.273232  645459 command_runner.go:130] > systemd 252 (252)
	I1209 11:23:28.273266  645459 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1209 11:23:28.273348  645459 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:23:28.429710  645459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 11:23:28.439362  645459 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1209 11:23:28.439685  645459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:23:28.439760  645459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:23:28.448646  645459 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 11:23:28.448674  645459 start.go:495] detecting cgroup driver to use...
	I1209 11:23:28.448751  645459 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:23:28.464648  645459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:23:28.477704  645459 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:23:28.477784  645459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:23:28.490155  645459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:23:28.502955  645459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:23:28.639125  645459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:23:28.776815  645459 docker.go:233] disabling docker service ...
	I1209 11:23:28.776892  645459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:23:28.792067  645459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:23:28.805251  645459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:23:28.990986  645459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:23:29.181706  645459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:23:29.197362  645459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:23:29.214609  645459 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1209 11:23:29.214681  645459 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:23:29.214742  645459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:23:29.224276  645459 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:23:29.224341  645459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:23:29.233797  645459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:23:29.243129  645459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:23:29.252851  645459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:23:29.262937  645459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:23:29.272413  645459 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:23:29.282248  645459 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:23:29.291812  645459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:23:29.301037  645459 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1209 11:23:29.301110  645459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:23:29.310190  645459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:23:29.450033  645459 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:23:29.674920  645459 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:23:29.674999  645459 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:23:29.679355  645459 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1209 11:23:29.679383  645459 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1209 11:23:29.679394  645459 command_runner.go:130] > Device: 0,22	Inode: 1372        Links: 1
	I1209 11:23:29.679405  645459 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 11:23:29.679412  645459 command_runner.go:130] > Access: 2024-12-09 11:23:29.524097371 +0000
	I1209 11:23:29.679454  645459 command_runner.go:130] > Modify: 2024-12-09 11:23:29.524097371 +0000
	I1209 11:23:29.679479  645459 command_runner.go:130] > Change: 2024-12-09 11:23:29.524097371 +0000
	I1209 11:23:29.679490  645459 command_runner.go:130] >  Birth: -
	I1209 11:23:29.679603  645459 start.go:563] Will wait 60s for crictl version
	I1209 11:23:29.679667  645459 ssh_runner.go:195] Run: which crictl
	I1209 11:23:29.683025  645459 command_runner.go:130] > /usr/bin/crictl
	I1209 11:23:29.683095  645459 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:23:29.721744  645459 command_runner.go:130] > Version:  0.1.0
	I1209 11:23:29.721776  645459 command_runner.go:130] > RuntimeName:  cri-o
	I1209 11:23:29.721782  645459 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1209 11:23:29.721978  645459 command_runner.go:130] > RuntimeApiVersion:  v1
	I1209 11:23:29.723196  645459 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:23:29.723267  645459 ssh_runner.go:195] Run: crio --version
	I1209 11:23:29.749497  645459 command_runner.go:130] > crio version 1.29.1
	I1209 11:23:29.749525  645459 command_runner.go:130] > Version:        1.29.1
	I1209 11:23:29.749534  645459 command_runner.go:130] > GitCommit:      unknown
	I1209 11:23:29.749540  645459 command_runner.go:130] > GitCommitDate:  unknown
	I1209 11:23:29.749548  645459 command_runner.go:130] > GitTreeState:   clean
	I1209 11:23:29.749557  645459 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1209 11:23:29.749563  645459 command_runner.go:130] > GoVersion:      go1.21.6
	I1209 11:23:29.749568  645459 command_runner.go:130] > Compiler:       gc
	I1209 11:23:29.749574  645459 command_runner.go:130] > Platform:       linux/amd64
	I1209 11:23:29.749578  645459 command_runner.go:130] > Linkmode:       dynamic
	I1209 11:23:29.749583  645459 command_runner.go:130] > BuildTags:      
	I1209 11:23:29.749589  645459 command_runner.go:130] >   containers_image_ostree_stub
	I1209 11:23:29.749596  645459 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1209 11:23:29.749600  645459 command_runner.go:130] >   btrfs_noversion
	I1209 11:23:29.749604  645459 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1209 11:23:29.749612  645459 command_runner.go:130] >   libdm_no_deferred_remove
	I1209 11:23:29.749615  645459 command_runner.go:130] >   seccomp
	I1209 11:23:29.749619  645459 command_runner.go:130] > LDFlags:          unknown
	I1209 11:23:29.749624  645459 command_runner.go:130] > SeccompEnabled:   true
	I1209 11:23:29.749628  645459 command_runner.go:130] > AppArmorEnabled:  false
	I1209 11:23:29.749696  645459 ssh_runner.go:195] Run: crio --version
	I1209 11:23:29.775891  645459 command_runner.go:130] > crio version 1.29.1
	I1209 11:23:29.775929  645459 command_runner.go:130] > Version:        1.29.1
	I1209 11:23:29.775938  645459 command_runner.go:130] > GitCommit:      unknown
	I1209 11:23:29.775945  645459 command_runner.go:130] > GitCommitDate:  unknown
	I1209 11:23:29.775952  645459 command_runner.go:130] > GitTreeState:   clean
	I1209 11:23:29.775961  645459 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1209 11:23:29.775967  645459 command_runner.go:130] > GoVersion:      go1.21.6
	I1209 11:23:29.775974  645459 command_runner.go:130] > Compiler:       gc
	I1209 11:23:29.775982  645459 command_runner.go:130] > Platform:       linux/amd64
	I1209 11:23:29.775991  645459 command_runner.go:130] > Linkmode:       dynamic
	I1209 11:23:29.775997  645459 command_runner.go:130] > BuildTags:      
	I1209 11:23:29.776005  645459 command_runner.go:130] >   containers_image_ostree_stub
	I1209 11:23:29.776009  645459 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1209 11:23:29.776013  645459 command_runner.go:130] >   btrfs_noversion
	I1209 11:23:29.776019  645459 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1209 11:23:29.776023  645459 command_runner.go:130] >   libdm_no_deferred_remove
	I1209 11:23:29.776030  645459 command_runner.go:130] >   seccomp
	I1209 11:23:29.776034  645459 command_runner.go:130] > LDFlags:          unknown
	I1209 11:23:29.776039  645459 command_runner.go:130] > SeccompEnabled:   true
	I1209 11:23:29.776043  645459 command_runner.go:130] > AppArmorEnabled:  false
	I1209 11:23:29.778797  645459 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:23:29.780230  645459 main.go:141] libmachine: (multinode-714725) Calling .GetIP
	I1209 11:23:29.782960  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:29.783349  645459 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:23:29.783388  645459 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:23:29.783606  645459 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 11:23:29.787570  645459 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1209 11:23:29.787679  645459 kubeadm.go:883] updating cluster {Name:multinode-714725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-714725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.208 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:23:29.787813  645459 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:23:29.787855  645459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:23:29.825610  645459 command_runner.go:130] > {
	I1209 11:23:29.825633  645459 command_runner.go:130] >   "images": [
	I1209 11:23:29.825637  645459 command_runner.go:130] >     {
	I1209 11:23:29.825645  645459 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1209 11:23:29.825650  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.825655  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1209 11:23:29.825659  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825663  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.825671  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1209 11:23:29.825678  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1209 11:23:29.825682  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825686  645459 command_runner.go:130] >       "size": "94965812",
	I1209 11:23:29.825694  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.825700  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.825706  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.825710  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.825716  645459 command_runner.go:130] >     },
	I1209 11:23:29.825719  645459 command_runner.go:130] >     {
	I1209 11:23:29.825725  645459 command_runner.go:130] >       "id": "50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e",
	I1209 11:23:29.825731  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.825737  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241108-5c6d2daf"
	I1209 11:23:29.825741  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825745  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.825752  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3",
	I1209 11:23:29.825760  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"
	I1209 11:23:29.825767  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825771  645459 command_runner.go:130] >       "size": "94963761",
	I1209 11:23:29.825775  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.825782  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.825786  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.825790  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.825794  645459 command_runner.go:130] >     },
	I1209 11:23:29.825797  645459 command_runner.go:130] >     {
	I1209 11:23:29.825804  645459 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1209 11:23:29.825808  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.825814  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1209 11:23:29.825818  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825824  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.825831  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1209 11:23:29.825838  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1209 11:23:29.825843  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825847  645459 command_runner.go:130] >       "size": "1363676",
	I1209 11:23:29.825851  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.825856  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.825860  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.825866  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.825869  645459 command_runner.go:130] >     },
	I1209 11:23:29.825872  645459 command_runner.go:130] >     {
	I1209 11:23:29.825878  645459 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1209 11:23:29.825884  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.825889  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 11:23:29.825895  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825899  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.825908  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1209 11:23:29.825922  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1209 11:23:29.825928  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825932  645459 command_runner.go:130] >       "size": "31470524",
	I1209 11:23:29.825939  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.825943  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.825950  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.825954  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.825960  645459 command_runner.go:130] >     },
	I1209 11:23:29.825963  645459 command_runner.go:130] >     {
	I1209 11:23:29.825971  645459 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1209 11:23:29.825975  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.825983  645459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1209 11:23:29.825986  645459 command_runner.go:130] >       ],
	I1209 11:23:29.825990  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.825997  645459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1209 11:23:29.826005  645459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1209 11:23:29.826009  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826013  645459 command_runner.go:130] >       "size": "63273227",
	I1209 11:23:29.826017  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.826020  645459 command_runner.go:130] >       "username": "nonroot",
	I1209 11:23:29.826024  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.826028  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.826032  645459 command_runner.go:130] >     },
	I1209 11:23:29.826035  645459 command_runner.go:130] >     {
	I1209 11:23:29.826041  645459 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1209 11:23:29.826047  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.826052  645459 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1209 11:23:29.826057  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826061  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.826070  645459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1209 11:23:29.826077  645459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1209 11:23:29.826083  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826088  645459 command_runner.go:130] >       "size": "149009664",
	I1209 11:23:29.826094  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.826098  645459 command_runner.go:130] >         "value": "0"
	I1209 11:23:29.826104  645459 command_runner.go:130] >       },
	I1209 11:23:29.826109  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.826116  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.826120  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.826127  645459 command_runner.go:130] >     },
	I1209 11:23:29.826130  645459 command_runner.go:130] >     {
	I1209 11:23:29.826136  645459 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1209 11:23:29.826142  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.826147  645459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1209 11:23:29.826155  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826159  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.826184  645459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1209 11:23:29.826227  645459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1209 11:23:29.826243  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826248  645459 command_runner.go:130] >       "size": "95274464",
	I1209 11:23:29.826252  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.826256  645459 command_runner.go:130] >         "value": "0"
	I1209 11:23:29.826260  645459 command_runner.go:130] >       },
	I1209 11:23:29.826265  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.826269  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.826273  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.826279  645459 command_runner.go:130] >     },
	I1209 11:23:29.826282  645459 command_runner.go:130] >     {
	I1209 11:23:29.826288  645459 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1209 11:23:29.826292  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.826298  645459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1209 11:23:29.826303  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826307  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.826326  645459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1209 11:23:29.826336  645459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1209 11:23:29.826342  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826346  645459 command_runner.go:130] >       "size": "89474374",
	I1209 11:23:29.826352  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.826356  645459 command_runner.go:130] >         "value": "0"
	I1209 11:23:29.826361  645459 command_runner.go:130] >       },
	I1209 11:23:29.826369  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.826373  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.826377  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.826379  645459 command_runner.go:130] >     },
	I1209 11:23:29.826382  645459 command_runner.go:130] >     {
	I1209 11:23:29.826388  645459 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1209 11:23:29.826391  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.826396  645459 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1209 11:23:29.826399  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826409  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.826416  645459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1209 11:23:29.826422  645459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1209 11:23:29.826426  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826433  645459 command_runner.go:130] >       "size": "92783513",
	I1209 11:23:29.826436  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.826440  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.826444  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.826448  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.826450  645459 command_runner.go:130] >     },
	I1209 11:23:29.826454  645459 command_runner.go:130] >     {
	I1209 11:23:29.826459  645459 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1209 11:23:29.826463  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.826467  645459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1209 11:23:29.826470  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826474  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.826481  645459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1209 11:23:29.826488  645459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1209 11:23:29.826492  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826495  645459 command_runner.go:130] >       "size": "68457798",
	I1209 11:23:29.826499  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.826503  645459 command_runner.go:130] >         "value": "0"
	I1209 11:23:29.826506  645459 command_runner.go:130] >       },
	I1209 11:23:29.826510  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.826514  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.826517  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.826521  645459 command_runner.go:130] >     },
	I1209 11:23:29.826525  645459 command_runner.go:130] >     {
	I1209 11:23:29.826531  645459 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1209 11:23:29.826535  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.826539  645459 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1209 11:23:29.826546  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826549  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.826556  645459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1209 11:23:29.826563  645459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1209 11:23:29.826572  645459 command_runner.go:130] >       ],
	I1209 11:23:29.826578  645459 command_runner.go:130] >       "size": "742080",
	I1209 11:23:29.826587  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.826593  645459 command_runner.go:130] >         "value": "65535"
	I1209 11:23:29.826601  645459 command_runner.go:130] >       },
	I1209 11:23:29.826606  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.826611  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.826617  645459 command_runner.go:130] >       "pinned": true
	I1209 11:23:29.826625  645459 command_runner.go:130] >     }
	I1209 11:23:29.826631  645459 command_runner.go:130] >   ]
	I1209 11:23:29.826636  645459 command_runner.go:130] > }
	I1209 11:23:29.826922  645459 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:23:29.826948  645459 crio.go:433] Images already preloaded, skipping extraction
	I1209 11:23:29.827026  645459 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:23:29.857666  645459 command_runner.go:130] > {
	I1209 11:23:29.857691  645459 command_runner.go:130] >   "images": [
	I1209 11:23:29.857694  645459 command_runner.go:130] >     {
	I1209 11:23:29.857702  645459 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1209 11:23:29.857708  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.857715  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1209 11:23:29.857719  645459 command_runner.go:130] >       ],
	I1209 11:23:29.857723  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.857732  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1209 11:23:29.857739  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1209 11:23:29.857743  645459 command_runner.go:130] >       ],
	I1209 11:23:29.857748  645459 command_runner.go:130] >       "size": "94965812",
	I1209 11:23:29.857752  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.857756  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.857764  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.857771  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.857777  645459 command_runner.go:130] >     },
	I1209 11:23:29.857781  645459 command_runner.go:130] >     {
	I1209 11:23:29.857787  645459 command_runner.go:130] >       "id": "50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e",
	I1209 11:23:29.857791  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.857796  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241108-5c6d2daf"
	I1209 11:23:29.857800  645459 command_runner.go:130] >       ],
	I1209 11:23:29.857804  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.857811  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3",
	I1209 11:23:29.857821  645459 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"
	I1209 11:23:29.857824  645459 command_runner.go:130] >       ],
	I1209 11:23:29.857828  645459 command_runner.go:130] >       "size": "94963761",
	I1209 11:23:29.857832  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.857839  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.857845  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.857852  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.857855  645459 command_runner.go:130] >     },
	I1209 11:23:29.857859  645459 command_runner.go:130] >     {
	I1209 11:23:29.857865  645459 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1209 11:23:29.857870  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.857875  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1209 11:23:29.857879  645459 command_runner.go:130] >       ],
	I1209 11:23:29.857885  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.857892  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1209 11:23:29.857901  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1209 11:23:29.857905  645459 command_runner.go:130] >       ],
	I1209 11:23:29.857934  645459 command_runner.go:130] >       "size": "1363676",
	I1209 11:23:29.857944  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.857947  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.857954  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.857959  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.857962  645459 command_runner.go:130] >     },
	I1209 11:23:29.857966  645459 command_runner.go:130] >     {
	I1209 11:23:29.857971  645459 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1209 11:23:29.857980  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.857985  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 11:23:29.857988  645459 command_runner.go:130] >       ],
	I1209 11:23:29.857992  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.858000  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1209 11:23:29.858011  645459 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1209 11:23:29.858015  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858019  645459 command_runner.go:130] >       "size": "31470524",
	I1209 11:23:29.858023  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.858026  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.858030  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.858034  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.858037  645459 command_runner.go:130] >     },
	I1209 11:23:29.858041  645459 command_runner.go:130] >     {
	I1209 11:23:29.858046  645459 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1209 11:23:29.858050  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.858055  645459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1209 11:23:29.858058  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858062  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.858068  645459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1209 11:23:29.858076  645459 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1209 11:23:29.858080  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858084  645459 command_runner.go:130] >       "size": "63273227",
	I1209 11:23:29.858088  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.858092  645459 command_runner.go:130] >       "username": "nonroot",
	I1209 11:23:29.858098  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.858102  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.858105  645459 command_runner.go:130] >     },
	I1209 11:23:29.858108  645459 command_runner.go:130] >     {
	I1209 11:23:29.858114  645459 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1209 11:23:29.858118  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.858123  645459 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1209 11:23:29.858126  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858130  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.858137  645459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1209 11:23:29.858143  645459 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1209 11:23:29.858147  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858150  645459 command_runner.go:130] >       "size": "149009664",
	I1209 11:23:29.858154  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.858158  645459 command_runner.go:130] >         "value": "0"
	I1209 11:23:29.858164  645459 command_runner.go:130] >       },
	I1209 11:23:29.858178  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.858182  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.858186  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.858190  645459 command_runner.go:130] >     },
	I1209 11:23:29.858193  645459 command_runner.go:130] >     {
	I1209 11:23:29.858198  645459 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1209 11:23:29.858201  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.858206  645459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1209 11:23:29.858209  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858213  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.858220  645459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1209 11:23:29.858227  645459 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1209 11:23:29.858231  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858235  645459 command_runner.go:130] >       "size": "95274464",
	I1209 11:23:29.858238  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.858242  645459 command_runner.go:130] >         "value": "0"
	I1209 11:23:29.858246  645459 command_runner.go:130] >       },
	I1209 11:23:29.858251  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.858258  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.858261  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.858264  645459 command_runner.go:130] >     },
	I1209 11:23:29.858267  645459 command_runner.go:130] >     {
	I1209 11:23:29.858273  645459 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1209 11:23:29.858279  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.858284  645459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1209 11:23:29.858290  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858294  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.858310  645459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1209 11:23:29.858317  645459 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1209 11:23:29.858324  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858328  645459 command_runner.go:130] >       "size": "89474374",
	I1209 11:23:29.858332  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.858335  645459 command_runner.go:130] >         "value": "0"
	I1209 11:23:29.858339  645459 command_runner.go:130] >       },
	I1209 11:23:29.858343  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.858347  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.858352  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.858356  645459 command_runner.go:130] >     },
	I1209 11:23:29.858361  645459 command_runner.go:130] >     {
	I1209 11:23:29.858367  645459 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1209 11:23:29.858371  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.858375  645459 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1209 11:23:29.858379  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858383  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.858389  645459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1209 11:23:29.858399  645459 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1209 11:23:29.858405  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858409  645459 command_runner.go:130] >       "size": "92783513",
	I1209 11:23:29.858413  645459 command_runner.go:130] >       "uid": null,
	I1209 11:23:29.858417  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.858421  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.858424  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.858428  645459 command_runner.go:130] >     },
	I1209 11:23:29.858431  645459 command_runner.go:130] >     {
	I1209 11:23:29.858438  645459 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1209 11:23:29.858444  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.858449  645459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1209 11:23:29.858454  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858459  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.858479  645459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1209 11:23:29.858498  645459 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1209 11:23:29.858504  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858508  645459 command_runner.go:130] >       "size": "68457798",
	I1209 11:23:29.858512  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.858516  645459 command_runner.go:130] >         "value": "0"
	I1209 11:23:29.858520  645459 command_runner.go:130] >       },
	I1209 11:23:29.858524  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.858528  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.858532  645459 command_runner.go:130] >       "pinned": false
	I1209 11:23:29.858535  645459 command_runner.go:130] >     },
	I1209 11:23:29.858538  645459 command_runner.go:130] >     {
	I1209 11:23:29.858544  645459 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1209 11:23:29.858547  645459 command_runner.go:130] >       "repoTags": [
	I1209 11:23:29.858552  645459 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1209 11:23:29.858555  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858559  645459 command_runner.go:130] >       "repoDigests": [
	I1209 11:23:29.858566  645459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1209 11:23:29.858573  645459 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1209 11:23:29.858576  645459 command_runner.go:130] >       ],
	I1209 11:23:29.858580  645459 command_runner.go:130] >       "size": "742080",
	I1209 11:23:29.858587  645459 command_runner.go:130] >       "uid": {
	I1209 11:23:29.858591  645459 command_runner.go:130] >         "value": "65535"
	I1209 11:23:29.858595  645459 command_runner.go:130] >       },
	I1209 11:23:29.858599  645459 command_runner.go:130] >       "username": "",
	I1209 11:23:29.858605  645459 command_runner.go:130] >       "spec": null,
	I1209 11:23:29.858609  645459 command_runner.go:130] >       "pinned": true
	I1209 11:23:29.858613  645459 command_runner.go:130] >     }
	I1209 11:23:29.858616  645459 command_runner.go:130] >   ]
	I1209 11:23:29.858619  645459 command_runner.go:130] > }
	I1209 11:23:29.859134  645459 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:23:29.859156  645459 cache_images.go:84] Images are preloaded, skipping loading
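	The JSON dump above is the crictl image inventory that minikube inspects before deciding whether the preload tarball still needs to be extracted. As a rough sketch only (not minikube's actual implementation; the struct and function names below are invented for illustration), the same payload could be decoded in Go and checked for a required tag such as registry.k8s.io/kube-apiserver:v1.31.2:

	// hasImage is an illustrative helper, not minikube code: it shells out to
	// "sudo crictl images --output json" (the command logged above) and decodes
	// the fields that appear in the log (id, repoTags, repoDigests, size, pinned).
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func hasImage(tag string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			return false, err
		}
		for _, img := range list.Images {
			for _, t := range img.RepoTags {
				if t == tag {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.2")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("preloaded:", ok)
	}

	In this run every tag needed for v1.31.2 is present, which is why the log reports the images as preloaded and skips extraction.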
	I1209 11:23:29.859166  645459 kubeadm.go:934] updating node { 192.168.39.31 8443 v1.31.2 crio true true} ...
	I1209 11:23:29.859309  645459 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-714725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-714725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
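	The ExecStart line logged above is the node's parameters substituted into a kubelet unit template. As a purely illustrative sketch (kubeletExecStart and its inputs are hypothetical, not minikube's real helper), assembling that flag line from the node name and IP shown in the config could look like:

	// Builds the kubelet command line seen in the [Service] section above from
	// the binaries directory, the node name, and the node IP. Illustrative only.
	package main

	import "fmt"

	func kubeletExecStart(binaryDir, nodeName, nodeIP string) string {
		return fmt.Sprintf(
			"%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
				"--config=/var/lib/kubelet/config.yaml --hostname-override=%s "+
				"--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s",
			binaryDir, nodeName, nodeIP)
	}

	func main() {
		fmt.Println(kubeletExecStart("/var/lib/minikube/binaries/v1.31.2", "multinode-714725", "192.168.39.31"))
	}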
	I1209 11:23:29.859403  645459 ssh_runner.go:195] Run: crio config
	I1209 11:23:29.898296  645459 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1209 11:23:29.898337  645459 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1209 11:23:29.898346  645459 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1209 11:23:29.898351  645459 command_runner.go:130] > #
	I1209 11:23:29.898374  645459 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1209 11:23:29.898384  645459 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1209 11:23:29.898394  645459 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1209 11:23:29.898416  645459 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1209 11:23:29.898427  645459 command_runner.go:130] > # reload'.
	I1209 11:23:29.898437  645459 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1209 11:23:29.898451  645459 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1209 11:23:29.898466  645459 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1209 11:23:29.898479  645459 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1209 11:23:29.898485  645459 command_runner.go:130] > [crio]
	I1209 11:23:29.898498  645459 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1209 11:23:29.898506  645459 command_runner.go:130] > # containers images, in this directory.
	I1209 11:23:29.898513  645459 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1209 11:23:29.898526  645459 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1209 11:23:29.898566  645459 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1209 11:23:29.898593  645459 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1209 11:23:29.898605  645459 command_runner.go:130] > # imagestore = ""
	I1209 11:23:29.898619  645459 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1209 11:23:29.898630  645459 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1209 11:23:29.898642  645459 command_runner.go:130] > storage_driver = "overlay"
	I1209 11:23:29.898658  645459 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1209 11:23:29.898671  645459 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1209 11:23:29.898680  645459 command_runner.go:130] > storage_option = [
	I1209 11:23:29.898688  645459 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1209 11:23:29.898696  645459 command_runner.go:130] > ]
	I1209 11:23:29.898706  645459 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1209 11:23:29.898725  645459 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1209 11:23:29.898736  645459 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1209 11:23:29.898745  645459 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1209 11:23:29.898759  645459 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1209 11:23:29.898769  645459 command_runner.go:130] > # always happen on a node reboot
	I1209 11:23:29.898777  645459 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1209 11:23:29.898797  645459 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1209 11:23:29.898811  645459 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1209 11:23:29.898818  645459 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1209 11:23:29.898825  645459 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1209 11:23:29.898836  645459 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1209 11:23:29.898851  645459 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1209 11:23:29.898859  645459 command_runner.go:130] > # internal_wipe = true
	I1209 11:23:29.898871  645459 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1209 11:23:29.898880  645459 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1209 11:23:29.898892  645459 command_runner.go:130] > # internal_repair = false
	I1209 11:23:29.898899  645459 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1209 11:23:29.898914  645459 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1209 11:23:29.898925  645459 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1209 11:23:29.898937  645459 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1209 11:23:29.898949  645459 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1209 11:23:29.898954  645459 command_runner.go:130] > [crio.api]
	I1209 11:23:29.898961  645459 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1209 11:23:29.898969  645459 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1209 11:23:29.898978  645459 command_runner.go:130] > # IP address on which the stream server will listen.
	I1209 11:23:29.898986  645459 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1209 11:23:29.899000  645459 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1209 11:23:29.899013  645459 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1209 11:23:29.899024  645459 command_runner.go:130] > # stream_port = "0"
	I1209 11:23:29.899033  645459 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1209 11:23:29.899043  645459 command_runner.go:130] > # stream_enable_tls = false
	I1209 11:23:29.899053  645459 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1209 11:23:29.899064  645459 command_runner.go:130] > # stream_idle_timeout = ""
	I1209 11:23:29.899074  645459 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1209 11:23:29.899087  645459 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1209 11:23:29.899093  645459 command_runner.go:130] > # minutes.
	I1209 11:23:29.899103  645459 command_runner.go:130] > # stream_tls_cert = ""
	I1209 11:23:29.899120  645459 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1209 11:23:29.899134  645459 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1209 11:23:29.899147  645459 command_runner.go:130] > # stream_tls_key = ""
	I1209 11:23:29.899159  645459 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1209 11:23:29.899172  645459 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1209 11:23:29.899189  645459 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1209 11:23:29.899199  645459 command_runner.go:130] > # stream_tls_ca = ""
	I1209 11:23:29.899210  645459 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 11:23:29.899226  645459 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1209 11:23:29.899240  645459 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 11:23:29.899251  645459 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1209 11:23:29.899264  645459 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1209 11:23:29.899277  645459 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1209 11:23:29.899286  645459 command_runner.go:130] > [crio.runtime]
	I1209 11:23:29.899294  645459 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1209 11:23:29.899305  645459 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1209 11:23:29.899314  645459 command_runner.go:130] > # "nofile=1024:2048"
	I1209 11:23:29.899327  645459 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1209 11:23:29.899337  645459 command_runner.go:130] > # default_ulimits = [
	I1209 11:23:29.899346  645459 command_runner.go:130] > # ]
	I1209 11:23:29.899356  645459 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1209 11:23:29.899368  645459 command_runner.go:130] > # no_pivot = false
	I1209 11:23:29.899382  645459 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1209 11:23:29.899397  645459 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1209 11:23:29.899409  645459 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1209 11:23:29.899423  645459 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1209 11:23:29.899434  645459 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1209 11:23:29.899445  645459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1209 11:23:29.899456  645459 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1209 11:23:29.899463  645459 command_runner.go:130] > # Cgroup setting for conmon
	I1209 11:23:29.899477  645459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1209 11:23:29.899487  645459 command_runner.go:130] > conmon_cgroup = "pod"
	I1209 11:23:29.899499  645459 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1209 11:23:29.899512  645459 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1209 11:23:29.899525  645459 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1209 11:23:29.899535  645459 command_runner.go:130] > conmon_env = [
	I1209 11:23:29.899544  645459 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1209 11:23:29.899559  645459 command_runner.go:130] > ]
	I1209 11:23:29.899573  645459 command_runner.go:130] > # Additional environment variables to set for all the
	I1209 11:23:29.899585  645459 command_runner.go:130] > # containers. These are overridden if set in the
	I1209 11:23:29.899598  645459 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1209 11:23:29.899609  645459 command_runner.go:130] > # default_env = [
	I1209 11:23:29.899614  645459 command_runner.go:130] > # ]
	I1209 11:23:29.899627  645459 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1209 11:23:29.899640  645459 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1209 11:23:29.899650  645459 command_runner.go:130] > # selinux = false
	I1209 11:23:29.899661  645459 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1209 11:23:29.899675  645459 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1209 11:23:29.899687  645459 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1209 11:23:29.899697  645459 command_runner.go:130] > # seccomp_profile = ""
	I1209 11:23:29.899706  645459 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1209 11:23:29.899718  645459 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1209 11:23:29.899731  645459 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1209 11:23:29.899741  645459 command_runner.go:130] > # which might increase security.
	I1209 11:23:29.899753  645459 command_runner.go:130] > # This option is currently deprecated,
	I1209 11:23:29.899761  645459 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1209 11:23:29.899773  645459 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1209 11:23:29.899786  645459 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1209 11:23:29.899800  645459 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1209 11:23:29.899813  645459 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1209 11:23:29.899827  645459 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1209 11:23:29.899842  645459 command_runner.go:130] > # This option supports live configuration reload.
	I1209 11:23:29.899852  645459 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1209 11:23:29.899862  645459 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1209 11:23:29.899871  645459 command_runner.go:130] > # the cgroup blockio controller.
	I1209 11:23:29.899879  645459 command_runner.go:130] > # blockio_config_file = ""
	I1209 11:23:29.899892  645459 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1209 11:23:29.899907  645459 command_runner.go:130] > # blockio parameters.
	I1209 11:23:29.899915  645459 command_runner.go:130] > # blockio_reload = false
	I1209 11:23:29.899924  645459 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1209 11:23:29.899934  645459 command_runner.go:130] > # irqbalance daemon.
	I1209 11:23:29.899944  645459 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1209 11:23:29.899956  645459 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1209 11:23:29.899970  645459 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1209 11:23:29.899983  645459 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1209 11:23:29.900002  645459 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1209 11:23:29.900020  645459 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1209 11:23:29.900031  645459 command_runner.go:130] > # This option supports live configuration reload.
	I1209 11:23:29.900042  645459 command_runner.go:130] > # rdt_config_file = ""
	I1209 11:23:29.900051  645459 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1209 11:23:29.900061  645459 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1209 11:23:29.900090  645459 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1209 11:23:29.900102  645459 command_runner.go:130] > # separate_pull_cgroup = ""
	I1209 11:23:29.900112  645459 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1209 11:23:29.900126  645459 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1209 11:23:29.900133  645459 command_runner.go:130] > # will be added.
	I1209 11:23:29.900143  645459 command_runner.go:130] > # default_capabilities = [
	I1209 11:23:29.900149  645459 command_runner.go:130] > # 	"CHOWN",
	I1209 11:23:29.900159  645459 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1209 11:23:29.900167  645459 command_runner.go:130] > # 	"FSETID",
	I1209 11:23:29.900176  645459 command_runner.go:130] > # 	"FOWNER",
	I1209 11:23:29.900183  645459 command_runner.go:130] > # 	"SETGID",
	I1209 11:23:29.900192  645459 command_runner.go:130] > # 	"SETUID",
	I1209 11:23:29.900197  645459 command_runner.go:130] > # 	"SETPCAP",
	I1209 11:23:29.900205  645459 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1209 11:23:29.900221  645459 command_runner.go:130] > # 	"KILL",
	I1209 11:23:29.900230  645459 command_runner.go:130] > # ]
	I1209 11:23:29.900241  645459 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1209 11:23:29.900255  645459 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1209 11:23:29.900266  645459 command_runner.go:130] > # add_inheritable_capabilities = false
	I1209 11:23:29.900275  645459 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1209 11:23:29.900287  645459 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 11:23:29.900297  645459 command_runner.go:130] > default_sysctls = [
	I1209 11:23:29.900304  645459 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1209 11:23:29.900312  645459 command_runner.go:130] > ]
	I1209 11:23:29.900320  645459 command_runner.go:130] > # List of devices on the host that a
	I1209 11:23:29.900334  645459 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1209 11:23:29.900344  645459 command_runner.go:130] > # allowed_devices = [
	I1209 11:23:29.900350  645459 command_runner.go:130] > # 	"/dev/fuse",
	I1209 11:23:29.900362  645459 command_runner.go:130] > # ]
	I1209 11:23:29.900375  645459 command_runner.go:130] > # List of additional devices. specified as
	I1209 11:23:29.900390  645459 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1209 11:23:29.900402  645459 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1209 11:23:29.900412  645459 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 11:23:29.900421  645459 command_runner.go:130] > # additional_devices = [
	I1209 11:23:29.900427  645459 command_runner.go:130] > # ]
	I1209 11:23:29.900438  645459 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1209 11:23:29.900455  645459 command_runner.go:130] > # cdi_spec_dirs = [
	I1209 11:23:29.900464  645459 command_runner.go:130] > # 	"/etc/cdi",
	I1209 11:23:29.900470  645459 command_runner.go:130] > # 	"/var/run/cdi",
	I1209 11:23:29.900478  645459 command_runner.go:130] > # ]
	I1209 11:23:29.900489  645459 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1209 11:23:29.900503  645459 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1209 11:23:29.900512  645459 command_runner.go:130] > # Defaults to false.
	I1209 11:23:29.900521  645459 command_runner.go:130] > # device_ownership_from_security_context = false
	I1209 11:23:29.900536  645459 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1209 11:23:29.900551  645459 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1209 11:23:29.900559  645459 command_runner.go:130] > # hooks_dir = [
	I1209 11:23:29.900567  645459 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1209 11:23:29.900575  645459 command_runner.go:130] > # ]
	I1209 11:23:29.900588  645459 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1209 11:23:29.900602  645459 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1209 11:23:29.900614  645459 command_runner.go:130] > # its default mounts from the following two files:
	I1209 11:23:29.900623  645459 command_runner.go:130] > #
	I1209 11:23:29.900632  645459 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1209 11:23:29.900646  645459 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1209 11:23:29.900658  645459 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1209 11:23:29.900667  645459 command_runner.go:130] > #
	I1209 11:23:29.900678  645459 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1209 11:23:29.900694  645459 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1209 11:23:29.900708  645459 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1209 11:23:29.900719  645459 command_runner.go:130] > #      only add mounts it finds in this file.
	I1209 11:23:29.900724  645459 command_runner.go:130] > #
	I1209 11:23:29.900733  645459 command_runner.go:130] > # default_mounts_file = ""
	I1209 11:23:29.900742  645459 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1209 11:23:29.900777  645459 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1209 11:23:29.900795  645459 command_runner.go:130] > pids_limit = 1024
	I1209 11:23:29.900805  645459 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1209 11:23:29.900819  645459 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1209 11:23:29.900832  645459 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1209 11:23:29.900847  645459 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1209 11:23:29.900856  645459 command_runner.go:130] > # log_size_max = -1
	I1209 11:23:29.900867  645459 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1209 11:23:29.900877  645459 command_runner.go:130] > # log_to_journald = false
	I1209 11:23:29.900888  645459 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1209 11:23:29.900905  645459 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1209 11:23:29.900920  645459 command_runner.go:130] > # Path to directory for container attach sockets.
	I1209 11:23:29.900935  645459 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1209 11:23:29.900946  645459 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1209 11:23:29.900954  645459 command_runner.go:130] > # bind_mount_prefix = ""
	I1209 11:23:29.900965  645459 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1209 11:23:29.900975  645459 command_runner.go:130] > # read_only = false
	I1209 11:23:29.900987  645459 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1209 11:23:29.901000  645459 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1209 11:23:29.901007  645459 command_runner.go:130] > # live configuration reload.
	I1209 11:23:29.901017  645459 command_runner.go:130] > # log_level = "info"
	I1209 11:23:29.901031  645459 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1209 11:23:29.901041  645459 command_runner.go:130] > # This option supports live configuration reload.
	I1209 11:23:29.901050  645459 command_runner.go:130] > # log_filter = ""
	I1209 11:23:29.901058  645459 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1209 11:23:29.901069  645459 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1209 11:23:29.901078  645459 command_runner.go:130] > # separated by comma.
	I1209 11:23:29.901087  645459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 11:23:29.901095  645459 command_runner.go:130] > # uid_mappings = ""
	I1209 11:23:29.901103  645459 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1209 11:23:29.901114  645459 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1209 11:23:29.901123  645459 command_runner.go:130] > # separated by comma.
	I1209 11:23:29.901133  645459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 11:23:29.901142  645459 command_runner.go:130] > # gid_mappings = ""
	I1209 11:23:29.901152  645459 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1209 11:23:29.901163  645459 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 11:23:29.901172  645459 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 11:23:29.901185  645459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 11:23:29.901195  645459 command_runner.go:130] > # minimum_mappable_uid = -1
	I1209 11:23:29.901206  645459 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1209 11:23:29.901225  645459 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 11:23:29.901239  645459 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 11:23:29.901250  645459 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 11:23:29.901264  645459 command_runner.go:130] > # minimum_mappable_gid = -1
	I1209 11:23:29.901272  645459 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1209 11:23:29.901285  645459 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1209 11:23:29.901296  645459 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1209 11:23:29.901312  645459 command_runner.go:130] > # ctr_stop_timeout = 30
	I1209 11:23:29.901323  645459 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1209 11:23:29.901334  645459 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1209 11:23:29.901341  645459 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1209 11:23:29.901351  645459 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1209 11:23:29.901356  645459 command_runner.go:130] > drop_infra_ctr = false
	I1209 11:23:29.901366  645459 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1209 11:23:29.901378  645459 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1209 11:23:29.901390  645459 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1209 11:23:29.901398  645459 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1209 11:23:29.901407  645459 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1209 11:23:29.901418  645459 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1209 11:23:29.901427  645459 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1209 11:23:29.901436  645459 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1209 11:23:29.901442  645459 command_runner.go:130] > # shared_cpuset = ""
	I1209 11:23:29.901454  645459 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1209 11:23:29.901464  645459 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1209 11:23:29.901475  645459 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1209 11:23:29.901488  645459 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1209 11:23:29.901498  645459 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1209 11:23:29.901506  645459 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1209 11:23:29.901517  645459 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1209 11:23:29.901523  645459 command_runner.go:130] > # enable_criu_support = false
	I1209 11:23:29.901532  645459 command_runner.go:130] > # Enable/disable the generation of the container,
	I1209 11:23:29.901540  645459 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1209 11:23:29.901550  645459 command_runner.go:130] > # enable_pod_events = false
	I1209 11:23:29.901559  645459 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1209 11:23:29.901570  645459 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1209 11:23:29.901581  645459 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1209 11:23:29.901595  645459 command_runner.go:130] > # default_runtime = "runc"
	I1209 11:23:29.901606  645459 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1209 11:23:29.901615  645459 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1209 11:23:29.901631  645459 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1209 11:23:29.901642  645459 command_runner.go:130] > # creation as a file is not desired either.
	I1209 11:23:29.901652  645459 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1209 11:23:29.901668  645459 command_runner.go:130] > # the hostname is being managed dynamically.
	I1209 11:23:29.901678  645459 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1209 11:23:29.901682  645459 command_runner.go:130] > # ]
	I1209 11:23:29.901692  645459 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1209 11:23:29.901703  645459 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1209 11:23:29.901716  645459 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1209 11:23:29.901727  645459 command_runner.go:130] > # Each entry in the table should follow the format:
	I1209 11:23:29.901736  645459 command_runner.go:130] > #
	I1209 11:23:29.901743  645459 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1209 11:23:29.901752  645459 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1209 11:23:29.901819  645459 command_runner.go:130] > # runtime_type = "oci"
	I1209 11:23:29.901838  645459 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1209 11:23:29.901851  645459 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1209 11:23:29.901858  645459 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1209 11:23:29.901869  645459 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1209 11:23:29.901875  645459 command_runner.go:130] > # monitor_env = []
	I1209 11:23:29.901883  645459 command_runner.go:130] > # privileged_without_host_devices = false
	I1209 11:23:29.901894  645459 command_runner.go:130] > # allowed_annotations = []
	I1209 11:23:29.901907  645459 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1209 11:23:29.901917  645459 command_runner.go:130] > # Where:
	I1209 11:23:29.901925  645459 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1209 11:23:29.901937  645459 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1209 11:23:29.901947  645459 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1209 11:23:29.901959  645459 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1209 11:23:29.901968  645459 command_runner.go:130] > #   in $PATH.
	I1209 11:23:29.901977  645459 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1209 11:23:29.901988  645459 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1209 11:23:29.901999  645459 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1209 11:23:29.902008  645459 command_runner.go:130] > #   state.
	I1209 11:23:29.902018  645459 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1209 11:23:29.902030  645459 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1209 11:23:29.902040  645459 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1209 11:23:29.902051  645459 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1209 11:23:29.902062  645459 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1209 11:23:29.902075  645459 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1209 11:23:29.902086  645459 command_runner.go:130] > #   The currently recognized values are:
	I1209 11:23:29.902095  645459 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1209 11:23:29.902109  645459 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1209 11:23:29.902126  645459 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1209 11:23:29.902136  645459 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1209 11:23:29.902150  645459 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1209 11:23:29.902164  645459 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1209 11:23:29.902195  645459 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1209 11:23:29.902208  645459 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1209 11:23:29.902227  645459 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1209 11:23:29.902240  645459 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1209 11:23:29.902247  645459 command_runner.go:130] > #   deprecated option "conmon".
	I1209 11:23:29.902264  645459 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1209 11:23:29.902276  645459 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1209 11:23:29.902286  645459 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1209 11:23:29.902297  645459 command_runner.go:130] > #   should be moved to the container's cgroup
	I1209 11:23:29.902307  645459 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1209 11:23:29.902319  645459 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1209 11:23:29.902331  645459 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1209 11:23:29.902345  645459 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1209 11:23:29.902353  645459 command_runner.go:130] > #
	I1209 11:23:29.902360  645459 command_runner.go:130] > # Using the seccomp notifier feature:
	I1209 11:23:29.902368  645459 command_runner.go:130] > #
	I1209 11:23:29.902377  645459 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1209 11:23:29.902388  645459 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1209 11:23:29.902398  645459 command_runner.go:130] > #
	I1209 11:23:29.902411  645459 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1209 11:23:29.902424  645459 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1209 11:23:29.902432  645459 command_runner.go:130] > #
	I1209 11:23:29.902443  645459 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1209 11:23:29.902451  645459 command_runner.go:130] > # feature.
	I1209 11:23:29.902457  645459 command_runner.go:130] > #
	I1209 11:23:29.902468  645459 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1209 11:23:29.902480  645459 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1209 11:23:29.902493  645459 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1209 11:23:29.902506  645459 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1209 11:23:29.902514  645459 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1209 11:23:29.902520  645459 command_runner.go:130] > #
	I1209 11:23:29.902529  645459 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1209 11:23:29.902547  645459 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1209 11:23:29.902553  645459 command_runner.go:130] > #
	I1209 11:23:29.902562  645459 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1209 11:23:29.902574  645459 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1209 11:23:29.902579  645459 command_runner.go:130] > #
	I1209 11:23:29.902588  645459 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1209 11:23:29.902596  645459 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1209 11:23:29.902605  645459 command_runner.go:130] > # limitation.
	I1209 11:23:29.902611  645459 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1209 11:23:29.902617  645459 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1209 11:23:29.902623  645459 command_runner.go:130] > runtime_type = "oci"
	I1209 11:23:29.902632  645459 command_runner.go:130] > runtime_root = "/run/runc"
	I1209 11:23:29.902638  645459 command_runner.go:130] > runtime_config_path = ""
	I1209 11:23:29.902648  645459 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1209 11:23:29.902655  645459 command_runner.go:130] > monitor_cgroup = "pod"
	I1209 11:23:29.902662  645459 command_runner.go:130] > monitor_exec_cgroup = ""
	I1209 11:23:29.902669  645459 command_runner.go:130] > monitor_env = [
	I1209 11:23:29.902681  645459 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1209 11:23:29.902686  645459 command_runner.go:130] > ]
	I1209 11:23:29.902695  645459 command_runner.go:130] > privileged_without_host_devices = false
	I1209 11:23:29.902707  645459 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1209 11:23:29.902716  645459 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1209 11:23:29.902726  645459 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1209 11:23:29.902739  645459 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1209 11:23:29.902750  645459 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1209 11:23:29.902761  645459 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1209 11:23:29.902778  645459 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1209 11:23:29.902793  645459 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1209 11:23:29.902805  645459 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1209 11:23:29.902814  645459 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1209 11:23:29.902823  645459 command_runner.go:130] > # Example:
	I1209 11:23:29.902828  645459 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1209 11:23:29.902835  645459 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1209 11:23:29.902842  645459 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1209 11:23:29.902849  645459 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1209 11:23:29.902855  645459 command_runner.go:130] > # cpuset = "0-1"
	I1209 11:23:29.902861  645459 command_runner.go:130] > # cpushares = 0
	I1209 11:23:29.902866  645459 command_runner.go:130] > # Where:
	I1209 11:23:29.902878  645459 command_runner.go:130] > # The workload name is workload-type.
	I1209 11:23:29.902887  645459 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1209 11:23:29.902893  645459 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1209 11:23:29.902900  645459 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1209 11:23:29.902912  645459 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1209 11:23:29.902920  645459 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1209 11:23:29.902927  645459 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1209 11:23:29.902936  645459 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1209 11:23:29.902943  645459 command_runner.go:130] > # Default value is set to true
	I1209 11:23:29.902950  645459 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1209 11:23:29.902959  645459 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1209 11:23:29.902967  645459 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1209 11:23:29.902974  645459 command_runner.go:130] > # Default value is set to 'false'
	I1209 11:23:29.902980  645459 command_runner.go:130] > # disable_hostport_mapping = false
	I1209 11:23:29.902994  645459 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1209 11:23:29.902998  645459 command_runner.go:130] > #
	I1209 11:23:29.903007  645459 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1209 11:23:29.903017  645459 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1209 11:23:29.903025  645459 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1209 11:23:29.903036  645459 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1209 11:23:29.903044  645459 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1209 11:23:29.903055  645459 command_runner.go:130] > [crio.image]
	I1209 11:23:29.903066  645459 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1209 11:23:29.903076  645459 command_runner.go:130] > # default_transport = "docker://"
	I1209 11:23:29.903089  645459 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1209 11:23:29.903101  645459 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1209 11:23:29.903111  645459 command_runner.go:130] > # global_auth_file = ""
	I1209 11:23:29.903120  645459 command_runner.go:130] > # The image used to instantiate infra containers.
	I1209 11:23:29.903130  645459 command_runner.go:130] > # This option supports live configuration reload.
	I1209 11:23:29.903136  645459 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1209 11:23:29.903149  645459 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1209 11:23:29.903160  645459 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1209 11:23:29.903169  645459 command_runner.go:130] > # This option supports live configuration reload.
	I1209 11:23:29.903178  645459 command_runner.go:130] > # pause_image_auth_file = ""
	I1209 11:23:29.903187  645459 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1209 11:23:29.903199  645459 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1209 11:23:29.903228  645459 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1209 11:23:29.903244  645459 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1209 11:23:29.903255  645459 command_runner.go:130] > # pause_command = "/pause"
	I1209 11:23:29.903270  645459 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1209 11:23:29.903283  645459 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1209 11:23:29.903296  645459 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1209 11:23:29.903308  645459 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1209 11:23:29.903319  645459 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1209 11:23:29.903332  645459 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1209 11:23:29.903339  645459 command_runner.go:130] > # pinned_images = [
	I1209 11:23:29.903347  645459 command_runner.go:130] > # ]
	I1209 11:23:29.903358  645459 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1209 11:23:29.903371  645459 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1209 11:23:29.903382  645459 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1209 11:23:29.903394  645459 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1209 11:23:29.903405  645459 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1209 11:23:29.903414  645459 command_runner.go:130] > # signature_policy = ""
	I1209 11:23:29.903424  645459 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1209 11:23:29.903437  645459 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1209 11:23:29.903450  645459 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1209 11:23:29.903459  645459 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1209 11:23:29.903471  645459 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1209 11:23:29.903482  645459 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1209 11:23:29.903494  645459 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1209 11:23:29.903506  645459 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1209 11:23:29.903515  645459 command_runner.go:130] > # changing them here.
	I1209 11:23:29.903522  645459 command_runner.go:130] > # insecure_registries = [
	I1209 11:23:29.903531  645459 command_runner.go:130] > # ]
	I1209 11:23:29.903544  645459 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1209 11:23:29.903556  645459 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1209 11:23:29.903562  645459 command_runner.go:130] > # image_volumes = "mkdir"
	I1209 11:23:29.903572  645459 command_runner.go:130] > # Temporary directory to use for storing big files
	I1209 11:23:29.903583  645459 command_runner.go:130] > # big_files_temporary_dir = ""
	I1209 11:23:29.903593  645459 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1209 11:23:29.903602  645459 command_runner.go:130] > # CNI plugins.
	I1209 11:23:29.903608  645459 command_runner.go:130] > [crio.network]
	I1209 11:23:29.903620  645459 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1209 11:23:29.903636  645459 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1209 11:23:29.903646  645459 command_runner.go:130] > # cni_default_network = ""
	I1209 11:23:29.903658  645459 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1209 11:23:29.903670  645459 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1209 11:23:29.903683  645459 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1209 11:23:29.903692  645459 command_runner.go:130] > # plugin_dirs = [
	I1209 11:23:29.903698  645459 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1209 11:23:29.903712  645459 command_runner.go:130] > # ]
	I1209 11:23:29.903724  645459 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1209 11:23:29.903735  645459 command_runner.go:130] > [crio.metrics]
	I1209 11:23:29.903745  645459 command_runner.go:130] > # Globally enable or disable metrics support.
	I1209 11:23:29.903754  645459 command_runner.go:130] > enable_metrics = true
	I1209 11:23:29.903763  645459 command_runner.go:130] > # Specify enabled metrics collectors.
	I1209 11:23:29.903773  645459 command_runner.go:130] > # Per default all metrics are enabled.
	I1209 11:23:29.903785  645459 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1209 11:23:29.903798  645459 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1209 11:23:29.903810  645459 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1209 11:23:29.903820  645459 command_runner.go:130] > # metrics_collectors = [
	I1209 11:23:29.903829  645459 command_runner.go:130] > # 	"operations",
	I1209 11:23:29.903839  645459 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1209 11:23:29.903848  645459 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1209 11:23:29.903856  645459 command_runner.go:130] > # 	"operations_errors",
	I1209 11:23:29.903865  645459 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1209 11:23:29.903871  645459 command_runner.go:130] > # 	"image_pulls_by_name",
	I1209 11:23:29.903881  645459 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1209 11:23:29.903888  645459 command_runner.go:130] > # 	"image_pulls_failures",
	I1209 11:23:29.903897  645459 command_runner.go:130] > # 	"image_pulls_successes",
	I1209 11:23:29.903908  645459 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1209 11:23:29.903916  645459 command_runner.go:130] > # 	"image_layer_reuse",
	I1209 11:23:29.903925  645459 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1209 11:23:29.903935  645459 command_runner.go:130] > # 	"containers_oom_total",
	I1209 11:23:29.903944  645459 command_runner.go:130] > # 	"containers_oom",
	I1209 11:23:29.903953  645459 command_runner.go:130] > # 	"processes_defunct",
	I1209 11:23:29.903960  645459 command_runner.go:130] > # 	"operations_total",
	I1209 11:23:29.903970  645459 command_runner.go:130] > # 	"operations_latency_seconds",
	I1209 11:23:29.903978  645459 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1209 11:23:29.903989  645459 command_runner.go:130] > # 	"operations_errors_total",
	I1209 11:23:29.904000  645459 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1209 11:23:29.904011  645459 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1209 11:23:29.904021  645459 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1209 11:23:29.904034  645459 command_runner.go:130] > # 	"image_pulls_success_total",
	I1209 11:23:29.904046  645459 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1209 11:23:29.904053  645459 command_runner.go:130] > # 	"containers_oom_count_total",
	I1209 11:23:29.904064  645459 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1209 11:23:29.904074  645459 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1209 11:23:29.904090  645459 command_runner.go:130] > # ]
	I1209 11:23:29.904102  645459 command_runner.go:130] > # The port on which the metrics server will listen.
	I1209 11:23:29.904113  645459 command_runner.go:130] > # metrics_port = 9090
	I1209 11:23:29.904125  645459 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1209 11:23:29.904131  645459 command_runner.go:130] > # metrics_socket = ""
	I1209 11:23:29.904142  645459 command_runner.go:130] > # The certificate for the secure metrics server.
	I1209 11:23:29.904152  645459 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1209 11:23:29.904165  645459 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1209 11:23:29.904175  645459 command_runner.go:130] > # certificate on any modification event.
	I1209 11:23:29.904184  645459 command_runner.go:130] > # metrics_cert = ""
	I1209 11:23:29.904192  645459 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1209 11:23:29.904202  645459 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1209 11:23:29.904211  645459 command_runner.go:130] > # metrics_key = ""
	I1209 11:23:29.904227  645459 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1209 11:23:29.904237  645459 command_runner.go:130] > [crio.tracing]
	I1209 11:23:29.904249  645459 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1209 11:23:29.904258  645459 command_runner.go:130] > # enable_tracing = false
	I1209 11:23:29.904269  645459 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1209 11:23:29.904280  645459 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1209 11:23:29.904293  645459 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1209 11:23:29.904305  645459 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1209 11:23:29.904312  645459 command_runner.go:130] > # CRI-O NRI configuration.
	I1209 11:23:29.904322  645459 command_runner.go:130] > [crio.nri]
	I1209 11:23:29.904332  645459 command_runner.go:130] > # Globally enable or disable NRI.
	I1209 11:23:29.904341  645459 command_runner.go:130] > # enable_nri = false
	I1209 11:23:29.904351  645459 command_runner.go:130] > # NRI socket to listen on.
	I1209 11:23:29.904361  645459 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1209 11:23:29.904371  645459 command_runner.go:130] > # NRI plugin directory to use.
	I1209 11:23:29.904380  645459 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1209 11:23:29.904387  645459 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1209 11:23:29.904397  645459 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1209 11:23:29.904404  645459 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1209 11:23:29.904414  645459 command_runner.go:130] > # nri_disable_connections = false
	I1209 11:23:29.904422  645459 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1209 11:23:29.904432  645459 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1209 11:23:29.904440  645459 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1209 11:23:29.904450  645459 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1209 11:23:29.904458  645459 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1209 11:23:29.904463  645459 command_runner.go:130] > [crio.stats]
	I1209 11:23:29.904471  645459 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1209 11:23:29.904479  645459 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1209 11:23:29.904486  645459 command_runner.go:130] > # stats_collection_period = 0
	I1209 11:23:29.904523  645459 command_runner.go:130] ! time="2024-12-09 11:23:29.866966552Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1209 11:23:29.904551  645459 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1209 11:23:29.904638  645459 cni.go:84] Creating CNI manager for ""
	I1209 11:23:29.904652  645459 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1209 11:23:29.904668  645459 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:23:29.904704  645459 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.31 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-714725 NodeName:multinode-714725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:23:29.904829  645459 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-714725"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.31"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.31"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:23:29.904903  645459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:23:29.920249  645459 command_runner.go:130] > kubeadm
	I1209 11:23:29.920280  645459 command_runner.go:130] > kubectl
	I1209 11:23:29.920286  645459 command_runner.go:130] > kubelet
	I1209 11:23:29.920309  645459 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:23:29.920383  645459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:23:29.929816  645459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1209 11:23:29.945595  645459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:23:29.961817  645459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1209 11:23:29.977580  645459 ssh_runner.go:195] Run: grep 192.168.39.31	control-plane.minikube.internal$ /etc/hosts
	I1209 11:23:29.981205  645459 command_runner.go:130] > 192.168.39.31	control-plane.minikube.internal
	I1209 11:23:29.981420  645459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:23:30.115771  645459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:23:30.130676  645459 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725 for IP: 192.168.39.31
	I1209 11:23:30.130704  645459 certs.go:194] generating shared ca certs ...
	I1209 11:23:30.130729  645459 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:23:30.130913  645459 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:23:30.130975  645459 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:23:30.130994  645459 certs.go:256] generating profile certs ...
	I1209 11:23:30.131122  645459 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/client.key
	I1209 11:23:30.131207  645459 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/apiserver.key.fa0b84c4
	I1209 11:23:30.131282  645459 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/proxy-client.key
	I1209 11:23:30.131302  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 11:23:30.131326  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 11:23:30.131346  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 11:23:30.131363  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 11:23:30.131389  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 11:23:30.131405  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 11:23:30.131423  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 11:23:30.131446  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 11:23:30.131507  645459 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:23:30.131551  645459 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:23:30.131561  645459 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:23:30.131591  645459 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:23:30.131628  645459 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:23:30.131661  645459 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:23:30.131750  645459 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:23:30.131859  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:23:30.131884  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem -> /usr/share/ca-certificates/617017.pem
	I1209 11:23:30.131904  645459 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> /usr/share/ca-certificates/6170172.pem
	I1209 11:23:30.132781  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:23:30.155181  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:23:30.177648  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:23:30.199088  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:23:30.220820  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 11:23:30.242445  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:23:30.264568  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:23:30.285723  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/multinode-714725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:23:30.307697  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:23:30.330439  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:23:30.351888  645459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:23:30.373689  645459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:23:30.388667  645459 ssh_runner.go:195] Run: openssl version
	I1209 11:23:30.394123  645459 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1209 11:23:30.394351  645459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:23:30.404186  645459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:23:30.408279  645459 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:23:30.408420  645459 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:23:30.408474  645459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:23:30.413751  645459 command_runner.go:130] > b5213941
	I1209 11:23:30.413815  645459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:23:30.422726  645459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:23:30.432470  645459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:23:30.436551  645459 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:23:30.436622  645459 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:23:30.436668  645459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:23:30.441877  645459 command_runner.go:130] > 51391683
	I1209 11:23:30.441949  645459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:23:30.450577  645459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:23:30.460334  645459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:23:30.464283  645459 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:23:30.464416  645459 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:23:30.464466  645459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:23:30.469648  645459 command_runner.go:130] > 3ec20f2e
	I1209 11:23:30.469721  645459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:23:30.478061  645459 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:23:30.482176  645459 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:23:30.482197  645459 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1209 11:23:30.482205  645459 command_runner.go:130] > Device: 253,1	Inode: 4197422     Links: 1
	I1209 11:23:30.482217  645459 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 11:23:30.482230  645459 command_runner.go:130] > Access: 2024-12-09 11:16:45.210739788 +0000
	I1209 11:23:30.482238  645459 command_runner.go:130] > Modify: 2024-12-09 11:16:45.210739788 +0000
	I1209 11:23:30.482250  645459 command_runner.go:130] > Change: 2024-12-09 11:16:45.210739788 +0000
	I1209 11:23:30.482262  645459 command_runner.go:130] >  Birth: 2024-12-09 11:16:45.210739788 +0000
	I1209 11:23:30.482304  645459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:23:30.487483  645459 command_runner.go:130] > Certificate will not expire
	I1209 11:23:30.487547  645459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:23:30.492442  645459 command_runner.go:130] > Certificate will not expire
	I1209 11:23:30.492668  645459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:23:30.497592  645459 command_runner.go:130] > Certificate will not expire
	I1209 11:23:30.497758  645459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:23:30.502759  645459 command_runner.go:130] > Certificate will not expire
	I1209 11:23:30.502809  645459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:23:30.507736  645459 command_runner.go:130] > Certificate will not expire
	I1209 11:23:30.507889  645459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 11:23:30.512797  645459 command_runner.go:130] > Certificate will not expire
	I1209 11:23:30.512866  645459 kubeadm.go:392] StartCluster: {Name:multinode-714725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-714725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.31 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.208 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:23:30.512998  645459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:23:30.513056  645459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:23:30.545698  645459 command_runner.go:130] > 000acae6b217f427b8f6acdf002e363e07c656690f10420767c4cc5a8eb5a9fb
	I1209 11:23:30.545736  645459 command_runner.go:130] > b290f046ccdb2cf03080d6ac2d459063f48e75106d6a3af08a6a2851744af474
	I1209 11:23:30.545746  645459 command_runner.go:130] > e26cbe010ad4442ceadffab51ef56d87b6f192b91651925c56194481053fa335
	I1209 11:23:30.545757  645459 command_runner.go:130] > f858cae62f854847342f432947742bac7fc1329cb1e1886fcddd5888a674d561
	I1209 11:23:30.545766  645459 command_runner.go:130] > 13dde430803b2ead2165363121d70eb4fedc39d2a7f6ea59aa7ed6fbbe2c4e8e
	I1209 11:23:30.545776  645459 command_runner.go:130] > 490cfe762cf3942a733ef67734bdad81051e0355b76be9b5df0ddc2872cbaf31
	I1209 11:23:30.545797  645459 command_runner.go:130] > b5459c77ec8bed068a441985a42c9997504af0e6beb5fe241f32d120a7df3940
	I1209 11:23:30.545819  645459 command_runner.go:130] > 027004a5bcaf40ecd3ca7d0b0f75eef805cabd118273733e4c134fa161d932fd
	I1209 11:23:30.547276  645459 cri.go:89] found id: "000acae6b217f427b8f6acdf002e363e07c656690f10420767c4cc5a8eb5a9fb"
	I1209 11:23:30.547294  645459 cri.go:89] found id: "b290f046ccdb2cf03080d6ac2d459063f48e75106d6a3af08a6a2851744af474"
	I1209 11:23:30.547299  645459 cri.go:89] found id: "e26cbe010ad4442ceadffab51ef56d87b6f192b91651925c56194481053fa335"
	I1209 11:23:30.547302  645459 cri.go:89] found id: "f858cae62f854847342f432947742bac7fc1329cb1e1886fcddd5888a674d561"
	I1209 11:23:30.547305  645459 cri.go:89] found id: "13dde430803b2ead2165363121d70eb4fedc39d2a7f6ea59aa7ed6fbbe2c4e8e"
	I1209 11:23:30.547309  645459 cri.go:89] found id: "490cfe762cf3942a733ef67734bdad81051e0355b76be9b5df0ddc2872cbaf31"
	I1209 11:23:30.547311  645459 cri.go:89] found id: "b5459c77ec8bed068a441985a42c9997504af0e6beb5fe241f32d120a7df3940"
	I1209 11:23:30.547314  645459 cri.go:89] found id: "027004a5bcaf40ecd3ca7d0b0f75eef805cabd118273733e4c134fa161d932fd"
	I1209 11:23:30.547317  645459 cri.go:89] found id: ""
	I1209 11:23:30.547366  645459 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
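The container listing that closes the log above comes from crictl, filtered by the kube-system pod-namespace label; minikube then echoes each returned ID back as a "found id" line. A minimal standalone equivalent of that query, as a sketch only, assuming crictl is installed on the node and CRI-O is serving on the socket this run uses (/var/run/crio/crio.sock):

	# Print the IDs of all kube-system containers known to CRI-O, running or not.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock \
	  ps -a --quiet --label io.kubernetes.pod.namespace=kube-system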
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-714725 -n multinode-714725
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-714725 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.34s)
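The certificate checks earlier in this log (the repeated "Certificate will not expire" lines) use openssl's -checkend flag, which succeeds only if the certificate remains valid for at least the given number of seconds; 86400 seconds is the 24-hour horizon these checks use. A hand-run sketch of the same check, assuming the certificate paths shown in the log:

	# Exits 0 and prints "Certificate will not expire" while the cert has more than 24h left;
	# otherwise it prints "Certificate will expire" and the reminder below.
	for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	           /var/lib/minikube/certs/etcd/server.crt; do
	  sudo openssl x509 -noout -in "$crt" -checkend 86400 || echo "renew $crt soon"
	done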

                                                
                                    
x
+
TestPreload (175.44s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-934001 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-934001 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m33.869264635s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-934001 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-934001 image pull gcr.io/k8s-minikube/busybox: (3.243502922s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-934001
E1209 11:33:22.652616  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-934001: (7.292123761s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-934001 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-934001 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m7.948609789s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-934001 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
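For context on the assertion that failed: the test builds the cluster with --preload=false on Kubernetes v1.24.4, pulls gcr.io/k8s-minikube/busybox into the runtime, stops the profile, restarts it, and then expects the pulled image to still appear in the runtime's image list. The listing above shows only the control-plane images, so the busybox image did not survive the restart. The final check can be reproduced by hand against the same profile (a sketch using the commands already shown in this run):

	# Succeeds (prints a match) only if the previously pulled image survived the restart.
	out/minikube-linux-amd64 -p test-preload-934001 image list | grep k8s-minikube/busybox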
panic.go:629: *** TestPreload FAILED at 2024-12-09 11:34:32.925236559 +0000 UTC m=+3661.385962065
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-934001 -n test-preload-934001
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-934001 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-934001 logs -n 25: (1.028076909s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-714725 ssh -n                                                                 | multinode-714725     | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n multinode-714725 sudo cat                                       | multinode-714725     | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | /home/docker/cp-test_multinode-714725-m03_multinode-714725.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-714725 cp multinode-714725-m03:/home/docker/cp-test.txt                       | multinode-714725     | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m02:/home/docker/cp-test_multinode-714725-m03_multinode-714725-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n                                                                 | multinode-714725     | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | multinode-714725-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-714725 ssh -n multinode-714725-m02 sudo cat                                   | multinode-714725     | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | /home/docker/cp-test_multinode-714725-m03_multinode-714725-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-714725 node stop m03                                                          | multinode-714725     | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	| node    | multinode-714725 node start                                                             | multinode-714725     | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC | 09 Dec 24 11:19 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-714725                                                                | multinode-714725     | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC |                     |
	| stop    | -p multinode-714725                                                                     | multinode-714725     | jenkins | v1.34.0 | 09 Dec 24 11:19 UTC |                     |
	| start   | -p multinode-714725                                                                     | multinode-714725     | jenkins | v1.34.0 | 09 Dec 24 11:21 UTC | 09 Dec 24 11:25 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-714725                                                                | multinode-714725     | jenkins | v1.34.0 | 09 Dec 24 11:25 UTC |                     |
	| node    | multinode-714725 node delete                                                            | multinode-714725     | jenkins | v1.34.0 | 09 Dec 24 11:25 UTC | 09 Dec 24 11:25 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-714725 stop                                                                   | multinode-714725     | jenkins | v1.34.0 | 09 Dec 24 11:25 UTC |                     |
	| start   | -p multinode-714725                                                                     | multinode-714725     | jenkins | v1.34.0 | 09 Dec 24 11:27 UTC | 09 Dec 24 11:30 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-714725                                                                | multinode-714725     | jenkins | v1.34.0 | 09 Dec 24 11:30 UTC |                     |
	| start   | -p multinode-714725-m02                                                                 | multinode-714725-m02 | jenkins | v1.34.0 | 09 Dec 24 11:30 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-714725-m03                                                                 | multinode-714725-m03 | jenkins | v1.34.0 | 09 Dec 24 11:30 UTC | 09 Dec 24 11:31 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-714725                                                                 | multinode-714725     | jenkins | v1.34.0 | 09 Dec 24 11:31 UTC |                     |
	| delete  | -p multinode-714725-m03                                                                 | multinode-714725-m03 | jenkins | v1.34.0 | 09 Dec 24 11:31 UTC | 09 Dec 24 11:31 UTC |
	| delete  | -p multinode-714725                                                                     | multinode-714725     | jenkins | v1.34.0 | 09 Dec 24 11:31 UTC | 09 Dec 24 11:31 UTC |
	| start   | -p test-preload-934001                                                                  | test-preload-934001  | jenkins | v1.34.0 | 09 Dec 24 11:31 UTC | 09 Dec 24 11:33 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-934001 image pull                                                          | test-preload-934001  | jenkins | v1.34.0 | 09 Dec 24 11:33 UTC | 09 Dec 24 11:33 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-934001                                                                  | test-preload-934001  | jenkins | v1.34.0 | 09 Dec 24 11:33 UTC | 09 Dec 24 11:33 UTC |
	| start   | -p test-preload-934001                                                                  | test-preload-934001  | jenkins | v1.34.0 | 09 Dec 24 11:33 UTC | 09 Dec 24 11:34 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-934001 image list                                                          | test-preload-934001  | jenkins | v1.34.0 | 09 Dec 24 11:34 UTC | 09 Dec 24 11:34 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 11:33:24
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 11:33:24.760510  649823 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:33:24.760626  649823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:33:24.760636  649823 out.go:358] Setting ErrFile to fd 2...
	I1209 11:33:24.760640  649823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:33:24.760850  649823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:33:24.761398  649823 out.go:352] Setting JSON to false
	I1209 11:33:24.762380  649823 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":15349,"bootTime":1733728656,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 11:33:24.762489  649823 start.go:139] virtualization: kvm guest
	I1209 11:33:24.764649  649823 out.go:177] * [test-preload-934001] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 11:33:24.765875  649823 notify.go:220] Checking for updates...
	I1209 11:33:24.765884  649823 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:33:24.767128  649823 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:33:24.768245  649823 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:33:24.769357  649823 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:33:24.770319  649823 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 11:33:24.771309  649823 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:33:24.772664  649823 config.go:182] Loaded profile config "test-preload-934001": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1209 11:33:24.773040  649823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:33:24.773113  649823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:33:24.793875  649823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35009
	I1209 11:33:24.794336  649823 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:33:24.794996  649823 main.go:141] libmachine: Using API Version  1
	I1209 11:33:24.795018  649823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:33:24.795417  649823 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:33:24.795628  649823 main.go:141] libmachine: (test-preload-934001) Calling .DriverName
	I1209 11:33:24.797240  649823 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 11:33:24.798229  649823 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:33:24.798528  649823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:33:24.798563  649823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:33:24.813617  649823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40059
	I1209 11:33:24.814031  649823 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:33:24.814609  649823 main.go:141] libmachine: Using API Version  1
	I1209 11:33:24.814632  649823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:33:24.814987  649823 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:33:24.815203  649823 main.go:141] libmachine: (test-preload-934001) Calling .DriverName
	I1209 11:33:24.850758  649823 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 11:33:24.851840  649823 start.go:297] selected driver: kvm2
	I1209 11:33:24.851860  649823 start.go:901] validating driver "kvm2" against &{Name:test-preload-934001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.24.4 ClusterName:test-preload-934001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:33:24.851966  649823 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:33:24.852650  649823 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:33:24.852743  649823 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 11:33:24.868300  649823 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 11:33:24.868640  649823 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:33:24.868670  649823 cni.go:84] Creating CNI manager for ""
	I1209 11:33:24.868695  649823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:33:24.868741  649823 start.go:340] cluster config:
	{Name:test-preload-934001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-934001 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:33:24.868838  649823 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:33:24.870426  649823 out.go:177] * Starting "test-preload-934001" primary control-plane node in "test-preload-934001" cluster
	I1209 11:33:24.871649  649823 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1209 11:33:25.292967  649823 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1209 11:33:25.293013  649823 cache.go:56] Caching tarball of preloaded images
	I1209 11:33:25.293173  649823 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1209 11:33:25.295079  649823 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1209 11:33:25.296282  649823 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1209 11:33:25.393024  649823 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1209 11:33:36.383584  649823 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1209 11:33:36.383694  649823 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1209 11:33:37.255829  649823 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I1209 11:33:37.256000  649823 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/test-preload-934001/config.json ...
	I1209 11:33:37.256230  649823 start.go:360] acquireMachinesLock for test-preload-934001: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:33:37.256308  649823 start.go:364] duration metric: took 47.867µs to acquireMachinesLock for "test-preload-934001"
	I1209 11:33:37.256321  649823 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:33:37.256328  649823 fix.go:54] fixHost starting: 
	I1209 11:33:37.256594  649823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:33:37.256628  649823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:33:37.271773  649823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37159
	I1209 11:33:37.272265  649823 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:33:37.272755  649823 main.go:141] libmachine: Using API Version  1
	I1209 11:33:37.272781  649823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:33:37.273106  649823 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:33:37.273370  649823 main.go:141] libmachine: (test-preload-934001) Calling .DriverName
	I1209 11:33:37.273546  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetState
	I1209 11:33:37.275192  649823 fix.go:112] recreateIfNeeded on test-preload-934001: state=Stopped err=<nil>
	I1209 11:33:37.275230  649823 main.go:141] libmachine: (test-preload-934001) Calling .DriverName
	W1209 11:33:37.275375  649823 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:33:37.277068  649823 out.go:177] * Restarting existing kvm2 VM for "test-preload-934001" ...
	I1209 11:33:37.278058  649823 main.go:141] libmachine: (test-preload-934001) Calling .Start
	I1209 11:33:37.278248  649823 main.go:141] libmachine: (test-preload-934001) Ensuring networks are active...
	I1209 11:33:37.279002  649823 main.go:141] libmachine: (test-preload-934001) Ensuring network default is active
	I1209 11:33:37.279333  649823 main.go:141] libmachine: (test-preload-934001) Ensuring network mk-test-preload-934001 is active
	I1209 11:33:37.279672  649823 main.go:141] libmachine: (test-preload-934001) Getting domain xml...
	I1209 11:33:37.280320  649823 main.go:141] libmachine: (test-preload-934001) Creating domain...
	I1209 11:33:38.507389  649823 main.go:141] libmachine: (test-preload-934001) Waiting to get IP...
	I1209 11:33:38.508706  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:38.509107  649823 main.go:141] libmachine: (test-preload-934001) DBG | unable to find current IP address of domain test-preload-934001 in network mk-test-preload-934001
	I1209 11:33:38.509205  649823 main.go:141] libmachine: (test-preload-934001) DBG | I1209 11:33:38.509096  649908 retry.go:31] will retry after 240.23744ms: waiting for machine to come up
	I1209 11:33:38.750610  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:38.751057  649823 main.go:141] libmachine: (test-preload-934001) DBG | unable to find current IP address of domain test-preload-934001 in network mk-test-preload-934001
	I1209 11:33:38.751092  649823 main.go:141] libmachine: (test-preload-934001) DBG | I1209 11:33:38.750994  649908 retry.go:31] will retry after 361.902786ms: waiting for machine to come up
	I1209 11:33:39.114825  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:39.115237  649823 main.go:141] libmachine: (test-preload-934001) DBG | unable to find current IP address of domain test-preload-934001 in network mk-test-preload-934001
	I1209 11:33:39.115267  649823 main.go:141] libmachine: (test-preload-934001) DBG | I1209 11:33:39.115189  649908 retry.go:31] will retry after 440.26901ms: waiting for machine to come up
	I1209 11:33:39.556772  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:39.557260  649823 main.go:141] libmachine: (test-preload-934001) DBG | unable to find current IP address of domain test-preload-934001 in network mk-test-preload-934001
	I1209 11:33:39.557283  649823 main.go:141] libmachine: (test-preload-934001) DBG | I1209 11:33:39.557190  649908 retry.go:31] will retry after 401.987136ms: waiting for machine to come up
	I1209 11:33:39.960809  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:39.961187  649823 main.go:141] libmachine: (test-preload-934001) DBG | unable to find current IP address of domain test-preload-934001 in network mk-test-preload-934001
	I1209 11:33:39.961209  649823 main.go:141] libmachine: (test-preload-934001) DBG | I1209 11:33:39.961157  649908 retry.go:31] will retry after 654.747627ms: waiting for machine to come up
	I1209 11:33:40.617046  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:40.617532  649823 main.go:141] libmachine: (test-preload-934001) DBG | unable to find current IP address of domain test-preload-934001 in network mk-test-preload-934001
	I1209 11:33:40.617559  649823 main.go:141] libmachine: (test-preload-934001) DBG | I1209 11:33:40.617492  649908 retry.go:31] will retry after 926.063925ms: waiting for machine to come up
	I1209 11:33:41.545718  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:41.546131  649823 main.go:141] libmachine: (test-preload-934001) DBG | unable to find current IP address of domain test-preload-934001 in network mk-test-preload-934001
	I1209 11:33:41.546195  649823 main.go:141] libmachine: (test-preload-934001) DBG | I1209 11:33:41.546084  649908 retry.go:31] will retry after 964.029416ms: waiting for machine to come up
	I1209 11:33:42.511722  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:42.512083  649823 main.go:141] libmachine: (test-preload-934001) DBG | unable to find current IP address of domain test-preload-934001 in network mk-test-preload-934001
	I1209 11:33:42.512112  649823 main.go:141] libmachine: (test-preload-934001) DBG | I1209 11:33:42.512020  649908 retry.go:31] will retry after 1.338415537s: waiting for machine to come up
	I1209 11:33:43.852600  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:43.853033  649823 main.go:141] libmachine: (test-preload-934001) DBG | unable to find current IP address of domain test-preload-934001 in network mk-test-preload-934001
	I1209 11:33:43.853063  649823 main.go:141] libmachine: (test-preload-934001) DBG | I1209 11:33:43.852978  649908 retry.go:31] will retry after 1.357702158s: waiting for machine to come up
	I1209 11:33:45.212696  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:45.213190  649823 main.go:141] libmachine: (test-preload-934001) DBG | unable to find current IP address of domain test-preload-934001 in network mk-test-preload-934001
	I1209 11:33:45.213218  649823 main.go:141] libmachine: (test-preload-934001) DBG | I1209 11:33:45.213153  649908 retry.go:31] will retry after 1.542222408s: waiting for machine to come up
	I1209 11:33:46.756714  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:46.757136  649823 main.go:141] libmachine: (test-preload-934001) DBG | unable to find current IP address of domain test-preload-934001 in network mk-test-preload-934001
	I1209 11:33:46.757160  649823 main.go:141] libmachine: (test-preload-934001) DBG | I1209 11:33:46.757080  649908 retry.go:31] will retry after 2.062262828s: waiting for machine to come up
	I1209 11:33:48.820455  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:48.820858  649823 main.go:141] libmachine: (test-preload-934001) DBG | unable to find current IP address of domain test-preload-934001 in network mk-test-preload-934001
	I1209 11:33:48.820885  649823 main.go:141] libmachine: (test-preload-934001) DBG | I1209 11:33:48.820811  649908 retry.go:31] will retry after 3.301215743s: waiting for machine to come up
	I1209 11:33:52.126364  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:52.126817  649823 main.go:141] libmachine: (test-preload-934001) DBG | unable to find current IP address of domain test-preload-934001 in network mk-test-preload-934001
	I1209 11:33:52.126846  649823 main.go:141] libmachine: (test-preload-934001) DBG | I1209 11:33:52.126779  649908 retry.go:31] will retry after 2.883798422s: waiting for machine to come up
	I1209 11:33:55.014164  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.014626  649823 main.go:141] libmachine: (test-preload-934001) Found IP for machine: 192.168.39.125
	I1209 11:33:55.014657  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has current primary IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.014668  649823 main.go:141] libmachine: (test-preload-934001) Reserving static IP address...
	I1209 11:33:55.015102  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "test-preload-934001", mac: "52:54:00:00:7d:c2", ip: "192.168.39.125"} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:33:55.015136  649823 main.go:141] libmachine: (test-preload-934001) Reserved static IP address: 192.168.39.125
	I1209 11:33:55.015154  649823 main.go:141] libmachine: (test-preload-934001) DBG | skip adding static IP to network mk-test-preload-934001 - found existing host DHCP lease matching {name: "test-preload-934001", mac: "52:54:00:00:7d:c2", ip: "192.168.39.125"}
	I1209 11:33:55.015169  649823 main.go:141] libmachine: (test-preload-934001) Waiting for SSH to be available...
	I1209 11:33:55.015192  649823 main.go:141] libmachine: (test-preload-934001) DBG | Getting to WaitForSSH function...
	I1209 11:33:55.017131  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.017406  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:7d:c2", ip: ""} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:33:55.017438  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.017541  649823 main.go:141] libmachine: (test-preload-934001) DBG | Using SSH client type: external
	I1209 11:33:55.017563  649823 main.go:141] libmachine: (test-preload-934001) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/test-preload-934001/id_rsa (-rw-------)
	I1209 11:33:55.017595  649823 main.go:141] libmachine: (test-preload-934001) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/test-preload-934001/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:33:55.017610  649823 main.go:141] libmachine: (test-preload-934001) DBG | About to run SSH command:
	I1209 11:33:55.017627  649823 main.go:141] libmachine: (test-preload-934001) DBG | exit 0
	I1209 11:33:55.141896  649823 main.go:141] libmachine: (test-preload-934001) DBG | SSH cmd err, output: <nil>: 
	I1209 11:33:55.142287  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetConfigRaw
	I1209 11:33:55.142934  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetIP
	I1209 11:33:55.145743  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.146233  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:7d:c2", ip: ""} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:33:55.146266  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.146455  649823 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/test-preload-934001/config.json ...
	I1209 11:33:55.146675  649823 machine.go:93] provisionDockerMachine start ...
	I1209 11:33:55.146699  649823 main.go:141] libmachine: (test-preload-934001) Calling .DriverName
	I1209 11:33:55.146959  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHHostname
	I1209 11:33:55.149268  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.149594  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:7d:c2", ip: ""} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:33:55.149636  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.149833  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHPort
	I1209 11:33:55.150010  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHKeyPath
	I1209 11:33:55.150155  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHKeyPath
	I1209 11:33:55.150307  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHUsername
	I1209 11:33:55.150502  649823 main.go:141] libmachine: Using SSH client type: native
	I1209 11:33:55.150677  649823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I1209 11:33:55.150687  649823 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:33:55.258926  649823 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:33:55.258961  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetMachineName
	I1209 11:33:55.259226  649823 buildroot.go:166] provisioning hostname "test-preload-934001"
	I1209 11:33:55.259254  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetMachineName
	I1209 11:33:55.259438  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHHostname
	I1209 11:33:55.262089  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.262478  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:7d:c2", ip: ""} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:33:55.262515  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.262642  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHPort
	I1209 11:33:55.262835  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHKeyPath
	I1209 11:33:55.262986  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHKeyPath
	I1209 11:33:55.263107  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHUsername
	I1209 11:33:55.263294  649823 main.go:141] libmachine: Using SSH client type: native
	I1209 11:33:55.263484  649823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I1209 11:33:55.263497  649823 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-934001 && echo "test-preload-934001" | sudo tee /etc/hostname
	I1209 11:33:55.384152  649823 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-934001
	
	I1209 11:33:55.384183  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHHostname
	I1209 11:33:55.386813  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.387140  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:7d:c2", ip: ""} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:33:55.387176  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.387304  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHPort
	I1209 11:33:55.387490  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHKeyPath
	I1209 11:33:55.387648  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHKeyPath
	I1209 11:33:55.387798  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHUsername
	I1209 11:33:55.387960  649823 main.go:141] libmachine: Using SSH client type: native
	I1209 11:33:55.388133  649823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I1209 11:33:55.388150  649823 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-934001' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-934001/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-934001' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:33:55.499774  649823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:33:55.499813  649823 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:33:55.499833  649823 buildroot.go:174] setting up certificates
	I1209 11:33:55.499841  649823 provision.go:84] configureAuth start
	I1209 11:33:55.499851  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetMachineName
	I1209 11:33:55.500179  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetIP
	I1209 11:33:55.502811  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.503133  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:7d:c2", ip: ""} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:33:55.503189  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.503260  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHHostname
	I1209 11:33:55.505420  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.505712  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:7d:c2", ip: ""} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:33:55.505750  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.505897  649823 provision.go:143] copyHostCerts
	I1209 11:33:55.505953  649823 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:33:55.505987  649823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:33:55.506052  649823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:33:55.506141  649823 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:33:55.506148  649823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:33:55.506208  649823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:33:55.506273  649823 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:33:55.506281  649823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:33:55.506305  649823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:33:55.506355  649823 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.test-preload-934001 san=[127.0.0.1 192.168.39.125 localhost minikube test-preload-934001]
	I1209 11:33:55.626311  649823 provision.go:177] copyRemoteCerts
	I1209 11:33:55.626388  649823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:33:55.626427  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHHostname
	I1209 11:33:55.629002  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.629367  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:7d:c2", ip: ""} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:33:55.629385  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.629591  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHPort
	I1209 11:33:55.629780  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHKeyPath
	I1209 11:33:55.629924  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHUsername
	I1209 11:33:55.630034  649823 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/test-preload-934001/id_rsa Username:docker}
	I1209 11:33:55.711685  649823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:33:55.733816  649823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1209 11:33:55.755231  649823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 11:33:55.776341  649823 provision.go:87] duration metric: took 276.4848ms to configureAuth
	I1209 11:33:55.776377  649823 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:33:55.776560  649823 config.go:182] Loaded profile config "test-preload-934001": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1209 11:33:55.776637  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHHostname
	I1209 11:33:55.779127  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.779443  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:7d:c2", ip: ""} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:33:55.779477  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.779601  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHPort
	I1209 11:33:55.779774  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHKeyPath
	I1209 11:33:55.779936  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHKeyPath
	I1209 11:33:55.780067  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHUsername
	I1209 11:33:55.780192  649823 main.go:141] libmachine: Using SSH client type: native
	I1209 11:33:55.780406  649823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I1209 11:33:55.780423  649823 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:33:55.995246  649823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:33:55.995280  649823 machine.go:96] duration metric: took 848.591495ms to provisionDockerMachine
	I1209 11:33:55.995295  649823 start.go:293] postStartSetup for "test-preload-934001" (driver="kvm2")
	I1209 11:33:55.995306  649823 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:33:55.995338  649823 main.go:141] libmachine: (test-preload-934001) Calling .DriverName
	I1209 11:33:55.995786  649823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:33:55.995839  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHHostname
	I1209 11:33:55.998283  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.998689  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:7d:c2", ip: ""} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:33:55.998738  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:55.998822  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHPort
	I1209 11:33:55.999010  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHKeyPath
	I1209 11:33:55.999181  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHUsername
	I1209 11:33:55.999322  649823 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/test-preload-934001/id_rsa Username:docker}
	I1209 11:33:56.080701  649823 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:33:56.084884  649823 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:33:56.084919  649823 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:33:56.085012  649823 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:33:56.085124  649823 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:33:56.085249  649823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:33:56.094158  649823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:33:56.116029  649823 start.go:296] duration metric: took 120.717633ms for postStartSetup
	I1209 11:33:56.116081  649823 fix.go:56] duration metric: took 18.859751737s for fixHost
	I1209 11:33:56.116113  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHHostname
	I1209 11:33:56.118776  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:56.119256  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:7d:c2", ip: ""} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:33:56.119294  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:56.119463  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHPort
	I1209 11:33:56.119644  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHKeyPath
	I1209 11:33:56.119805  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHKeyPath
	I1209 11:33:56.119961  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHUsername
	I1209 11:33:56.120191  649823 main.go:141] libmachine: Using SSH client type: native
	I1209 11:33:56.120362  649823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I1209 11:33:56.120372  649823 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:33:56.226629  649823 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733744036.185721688
	
	I1209 11:33:56.226660  649823 fix.go:216] guest clock: 1733744036.185721688
	I1209 11:33:56.226667  649823 fix.go:229] Guest: 2024-12-09 11:33:56.185721688 +0000 UTC Remote: 2024-12-09 11:33:56.116087713 +0000 UTC m=+31.393690088 (delta=69.633975ms)
	I1209 11:33:56.226690  649823 fix.go:200] guest clock delta is within tolerance: 69.633975ms
	I1209 11:33:56.226695  649823 start.go:83] releasing machines lock for "test-preload-934001", held for 18.970379054s
	I1209 11:33:56.226717  649823 main.go:141] libmachine: (test-preload-934001) Calling .DriverName
	I1209 11:33:56.227021  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetIP
	I1209 11:33:56.229523  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:56.229858  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:7d:c2", ip: ""} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:33:56.229888  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:56.230063  649823 main.go:141] libmachine: (test-preload-934001) Calling .DriverName
	I1209 11:33:56.230570  649823 main.go:141] libmachine: (test-preload-934001) Calling .DriverName
	I1209 11:33:56.230763  649823 main.go:141] libmachine: (test-preload-934001) Calling .DriverName
	I1209 11:33:56.230873  649823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:33:56.230918  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHHostname
	I1209 11:33:56.230968  649823 ssh_runner.go:195] Run: cat /version.json
	I1209 11:33:56.230995  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHHostname
	I1209 11:33:56.233654  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:56.233799  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:56.234068  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:7d:c2", ip: ""} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:33:56.234102  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:56.234127  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:7d:c2", ip: ""} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:33:56.234147  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:56.234252  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHPort
	I1209 11:33:56.234378  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHPort
	I1209 11:33:56.234445  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHKeyPath
	I1209 11:33:56.234596  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHUsername
	I1209 11:33:56.234723  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHKeyPath
	I1209 11:33:56.234821  649823 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/test-preload-934001/id_rsa Username:docker}
	I1209 11:33:56.234858  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHUsername
	I1209 11:33:56.234987  649823 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/test-preload-934001/id_rsa Username:docker}
	I1209 11:33:56.310782  649823 ssh_runner.go:195] Run: systemctl --version
	I1209 11:33:56.354830  649823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:33:56.493605  649823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:33:56.499297  649823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:33:56.499379  649823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:33:56.514314  649823 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:33:56.514347  649823 start.go:495] detecting cgroup driver to use...
	I1209 11:33:56.514451  649823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:33:56.529280  649823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:33:56.541805  649823 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:33:56.541877  649823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:33:56.554410  649823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:33:56.566870  649823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:33:56.673692  649823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:33:56.846804  649823 docker.go:233] disabling docker service ...
	I1209 11:33:56.846897  649823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:33:56.860285  649823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:33:56.872106  649823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:33:56.985511  649823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:33:57.099189  649823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:33:57.112833  649823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:33:57.129930  649823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1209 11:33:57.130006  649823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:33:57.139631  649823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:33:57.139719  649823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:33:57.149927  649823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:33:57.160037  649823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:33:57.169805  649823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:33:57.179489  649823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:33:57.189087  649823 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:33:57.205293  649823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:33:57.215894  649823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:33:57.225382  649823 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:33:57.225445  649823 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:33:57.236792  649823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
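Note: the sysctl failure above only means the br_netfilter module was not loaded yet; the fallback is to modprobe it and enable IPv4 forwarding directly. A standalone re-check on the node (values other than ip_forward=1 are not asserted by the log):

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # key exists once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward           # 1, set by the echo above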
	I1209 11:33:57.245516  649823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:33:57.356001  649823 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:33:57.439987  649823 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:33:57.440056  649823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:33:57.444701  649823 start.go:563] Will wait 60s for crictl version
	I1209 11:33:57.444752  649823 ssh_runner.go:195] Run: which crictl
	I1209 11:33:57.448290  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:33:57.484351  649823 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:33:57.484437  649823 ssh_runner.go:195] Run: crio --version
	I1209 11:33:57.510344  649823 ssh_runner.go:195] Run: crio --version
	I1209 11:33:57.539174  649823 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1209 11:33:57.540397  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetIP
	I1209 11:33:57.543209  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:57.543574  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:7d:c2", ip: ""} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:33:57.543601  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:33:57.543813  649823 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 11:33:57.547571  649823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:33:57.559368  649823 kubeadm.go:883] updating cluster {Name:test-preload-934001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-934001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:33:57.559485  649823 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1209 11:33:57.559531  649823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:33:57.592464  649823 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
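Note: the probe above decides whether the preload tarball is needed by looking for a known image tag in the runtime's image list. A hedged manual equivalent (jq is an assumption for readability; minikube itself parses the JSON in Go):

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | grep kube-apiserver \
      || echo "kube-apiserver image not present; preload extraction required"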
	I1209 11:33:57.592538  649823 ssh_runner.go:195] Run: which lz4
	I1209 11:33:57.596121  649823 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:33:57.599775  649823 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:33:57.599802  649823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1209 11:33:58.999567  649823 crio.go:462] duration metric: took 1.40347362s to copy over tarball
	I1209 11:33:58.999663  649823 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:34:01.279046  649823 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.279345931s)
	I1209 11:34:01.279085  649823 crio.go:469] duration metric: took 2.27947996s to extract the tarball
	I1209 11:34:01.279094  649823 ssh_runner.go:146] rm: /preloaded.tar.lz4
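Note: the three steps above copy the ~459 MB preload tarball onto the node, unpack it under /var (which backs both the image store and /var/lib/minikube), and remove the archive. A hand-rolled equivalent, assuming direct root ssh access to the node, which minikube normally mediates with its own key:

    scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 \
        root@192.168.39.125:/preloaded.tar.lz4
    ssh root@192.168.39.125 \
        'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'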
	I1209 11:34:01.319647  649823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:34:01.361534  649823 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1209 11:34:01.361566  649823 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 11:34:01.361653  649823 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:34:01.361672  649823 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1209 11:34:01.361709  649823 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1209 11:34:01.361719  649823 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1209 11:34:01.361739  649823 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 11:34:01.361747  649823 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1209 11:34:01.361683  649823 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1209 11:34:01.361767  649823 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1209 11:34:01.363360  649823 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1209 11:34:01.363366  649823 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1209 11:34:01.363376  649823 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 11:34:01.363363  649823 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1209 11:34:01.363363  649823 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1209 11:34:01.363418  649823 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:34:01.363363  649823 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1209 11:34:01.363422  649823 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1209 11:34:01.585734  649823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1209 11:34:01.593537  649823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1209 11:34:01.613097  649823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1209 11:34:01.632558  649823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1209 11:34:01.643275  649823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1209 11:34:01.648759  649823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1209 11:34:01.650313  649823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1209 11:34:01.673325  649823 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1209 11:34:01.673376  649823 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1209 11:34:01.673382  649823 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1209 11:34:01.673416  649823 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 11:34:01.673430  649823 ssh_runner.go:195] Run: which crictl
	I1209 11:34:01.673460  649823 ssh_runner.go:195] Run: which crictl
	I1209 11:34:01.740561  649823 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1209 11:34:01.740616  649823 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1209 11:34:01.740672  649823 ssh_runner.go:195] Run: which crictl
	I1209 11:34:01.766049  649823 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1209 11:34:01.766091  649823 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1209 11:34:01.766135  649823 ssh_runner.go:195] Run: which crictl
	I1209 11:34:01.768964  649823 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1209 11:34:01.768988  649823 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1209 11:34:01.769020  649823 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1209 11:34:01.769021  649823 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1209 11:34:01.769067  649823 ssh_runner.go:195] Run: which crictl
	I1209 11:34:01.769109  649823 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1209 11:34:01.769152  649823 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1209 11:34:01.769181  649823 ssh_runner.go:195] Run: which crictl
	I1209 11:34:01.769180  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1209 11:34:01.769199  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1209 11:34:01.769072  649823 ssh_runner.go:195] Run: which crictl
	I1209 11:34:01.769201  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1209 11:34:01.773241  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1209 11:34:01.773289  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1209 11:34:01.779037  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1209 11:34:01.898023  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1209 11:34:01.898052  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1209 11:34:01.898103  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1209 11:34:01.898103  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1209 11:34:01.898209  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1209 11:34:01.910651  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1209 11:34:01.910687  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1209 11:34:02.033039  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1209 11:34:02.033049  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1209 11:34:02.033093  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1209 11:34:02.033213  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1209 11:34:02.033217  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1209 11:34:02.044632  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1209 11:34:02.048084  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1209 11:34:02.168336  649823 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1209 11:34:02.168410  649823 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1209 11:34:02.168450  649823 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1209 11:34:02.168511  649823 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1209 11:34:02.184465  649823 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1209 11:34:02.184481  649823 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1209 11:34:02.184500  649823 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1209 11:34:02.184570  649823 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1209 11:34:02.184595  649823 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1209 11:34:02.187388  649823 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1209 11:34:02.187391  649823 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1209 11:34:02.187473  649823 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1209 11:34:02.187476  649823 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1209 11:34:02.191764  649823 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1209 11:34:02.191787  649823 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1209 11:34:02.191840  649823 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1209 11:34:02.191971  649823 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1209 11:34:02.195639  649823 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1209 11:34:02.231142  649823 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1209 11:34:02.231176  649823 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1209 11:34:02.231274  649823 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1209 11:34:02.231283  649823 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1209 11:34:02.231285  649823 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1209 11:34:02.612535  649823 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:34:05.160440  649823 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.968568218s)
	I1209 11:34:05.160487  649823 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1209 11:34:05.160486  649823 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.929177047s)
	I1209 11:34:05.160509  649823 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1209 11:34:05.160514  649823 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1209 11:34:05.160564  649823 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1209 11:34:05.160568  649823 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.547994431s)
	I1209 11:34:05.301858  649823 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1209 11:34:05.301918  649823 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1209 11:34:05.301979  649823 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1209 11:34:06.043299  649823 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1209 11:34:06.043351  649823 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1209 11:34:06.043411  649823 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1209 11:34:08.091047  649823 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.047604549s)
	I1209 11:34:08.091087  649823 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1209 11:34:08.091117  649823 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1209 11:34:08.091188  649823 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1209 11:34:08.840661  649823 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1209 11:34:08.840716  649823 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1209 11:34:08.840774  649823 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1209 11:34:09.285854  649823 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1209 11:34:09.285915  649823 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1209 11:34:09.285966  649823 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1209 11:34:10.134456  649823 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1209 11:34:10.134514  649823 cache_images.go:123] Successfully loaded all cached images
	I1209 11:34:10.134522  649823 cache_images.go:92] duration metric: took 8.772939556s to LoadCachedImages
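Note: the image-cache sequence above boils down to: inspect the runtime for each required image, remove any stale tag with crictl, skip the copy when the archive already exists under /var/lib/minikube/images, and podman-load each archive in turn. A compressed sketch covering the seven images that needed transfer in this run (file names taken from the log):

    for img in pause_3.7 coredns_v1.8.6 etcd_3.5.3-0 kube-apiserver_v1.24.4 \
               kube-controller-manager_v1.24.4 kube-scheduler_v1.24.4 kube-proxy_v1.24.4; do
      sudo podman load -i "/var/lib/minikube/images/${img}"
    done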
	I1209 11:34:10.134541  649823 kubeadm.go:934] updating node { 192.168.39.125 8443 v1.24.4 crio true true} ...
	I1209 11:34:10.134700  649823 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-934001 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-934001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
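Note: the unit drop-in rendered above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (379 bytes). Once daemon-reload has run it can be inspected on the node with:

    systemctl cat kubelet            # kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl status kubelet --no-pager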
	I1209 11:34:10.134779  649823 ssh_runner.go:195] Run: crio config
	I1209 11:34:10.183034  649823 cni.go:84] Creating CNI manager for ""
	I1209 11:34:10.183062  649823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:34:10.183076  649823 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:34:10.183102  649823 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-934001 NodeName:test-preload-934001 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:34:10.183259  649823 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-934001"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:34:10.183329  649823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1209 11:34:10.192877  649823 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:34:10.192955  649823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:34:10.201916  649823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1209 11:34:10.217904  649823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:34:10.233264  649823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
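Note: the 2106-byte file staged above is the combined kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one document); it is only promoted from kubeadm.yaml.new to kubeadm.yaml after the diff check further down. One hedged way to exercise it without mutating node state, assuming kubeadm v1.24 accepts --dry-run here and that preflight does not object to the already-running node:

    sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run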
	I1209 11:34:10.249404  649823 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I1209 11:34:10.253077  649823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:34:10.264664  649823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:34:10.389506  649823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:34:10.406247  649823 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/test-preload-934001 for IP: 192.168.39.125
	I1209 11:34:10.406276  649823 certs.go:194] generating shared ca certs ...
	I1209 11:34:10.406301  649823 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:34:10.406503  649823 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:34:10.406551  649823 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:34:10.406566  649823 certs.go:256] generating profile certs ...
	I1209 11:34:10.406687  649823 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/test-preload-934001/client.key
	I1209 11:34:10.406772  649823 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/test-preload-934001/apiserver.key.63926733
	I1209 11:34:10.406822  649823 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/test-preload-934001/proxy-client.key
	I1209 11:34:10.406975  649823 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:34:10.407018  649823 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:34:10.407031  649823 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:34:10.407068  649823 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:34:10.407099  649823 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:34:10.407130  649823 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:34:10.407177  649823 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:34:10.408100  649823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:34:10.446149  649823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:34:10.476932  649823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:34:10.507278  649823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:34:10.539666  649823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/test-preload-934001/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 11:34:10.572732  649823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/test-preload-934001/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 11:34:10.602407  649823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/test-preload-934001/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:34:10.631206  649823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/test-preload-934001/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:34:10.654152  649823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:34:10.676014  649823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:34:10.698070  649823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:34:10.719989  649823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:34:10.735497  649823 ssh_runner.go:195] Run: openssl version
	I1209 11:34:10.740880  649823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:34:10.750886  649823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:34:10.754959  649823 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:34:10.755012  649823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:34:10.760415  649823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:34:10.770183  649823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:34:10.780016  649823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:34:10.783985  649823 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:34:10.784054  649823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:34:10.789335  649823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:34:10.799206  649823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:34:10.808901  649823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:34:10.812817  649823 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:34:10.812867  649823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:34:10.818163  649823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
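Note: each CA above is installed twice: the PEM under /usr/share/ca-certificates and a <subject-hash>.0 symlink under /etc/ssl/certs so OpenSSL's lookup-by-hash works. The hash in the symlink name is exactly what the openssl x509 -hash call prints. A sketch of the convention for the minikubeCA case (the b5213941 value comes from this run's log):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"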
	I1209 11:34:10.827865  649823 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:34:10.831993  649823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:34:10.837674  649823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:34:10.843245  649823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:34:10.848866  649823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:34:10.854491  649823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:34:10.859812  649823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
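Note: the -checkend 86400 calls above exit non-zero if a certificate expires within the next 24 hours, which is how the restart path decides between reusing and regenerating the control-plane certs. A standalone equivalent for one of them:

    if sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "cert valid for at least another 24h; reuse"
    else
      echo "cert expires within 24h; would be regenerated"
    fi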
	I1209 11:34:10.865249  649823 kubeadm.go:392] StartCluster: {Name:test-preload-934001 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-934001 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:34:10.865367  649823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:34:10.865417  649823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:34:10.902086  649823 cri.go:89] found id: ""
	I1209 11:34:10.902161  649823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:34:10.911797  649823 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:34:10.911822  649823 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:34:10.911882  649823 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:34:10.920925  649823 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:34:10.921425  649823 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-934001" does not appear in /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:34:10.921543  649823 kubeconfig.go:62] /home/jenkins/minikube-integration/20068-609844/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-934001" cluster setting kubeconfig missing "test-preload-934001" context setting]
	I1209 11:34:10.921818  649823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:34:10.922504  649823 kapi.go:59] client config for test-preload-934001: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/test-preload-934001/client.crt", KeyFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/test-preload-934001/client.key", CAFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 11:34:10.923152  649823 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:34:10.931981  649823 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.125
	I1209 11:34:10.932019  649823 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:34:10.932034  649823 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:34:10.932089  649823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:34:10.966188  649823 cri.go:89] found id: ""
	I1209 11:34:10.966278  649823 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:34:10.981816  649823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:34:10.990811  649823 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:34:10.990835  649823 kubeadm.go:157] found existing configuration files:
	
	I1209 11:34:10.990885  649823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:34:10.999245  649823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:34:10.999315  649823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:34:11.008151  649823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:34:11.016543  649823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:34:11.016629  649823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:34:11.025039  649823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:34:11.033305  649823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:34:11.033360  649823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:34:11.041993  649823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:34:11.050102  649823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:34:11.050207  649823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
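Note: the four grep/rm pairs above apply the same rule to each kubeconfig: if the file does not already point at https://control-plane.minikube.internal:8443 (here they are simply missing), remove it so the kubeadm phases below can regenerate it. Folded into a loop (paths from the log):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done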
	I1209 11:34:11.058894  649823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:34:11.067630  649823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:34:11.161923  649823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:34:12.298257  649823 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.136265988s)
	I1209 11:34:12.298317  649823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:34:12.545427  649823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:34:12.607160  649823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
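Note: rather than a full kubeadm init, the restart path replays individual phases against the staged config; the addon phase only runs later, once the apiserver reports healthy. The sequence above, spelled out:

    # $phase is intentionally unquoted so two-word phases split into separate arguments
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase $phase \
        --config /var/tmp/minikube/kubeadm.yaml
    done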
	I1209 11:34:12.703034  649823 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:34:12.703155  649823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:34:13.203679  649823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:34:13.703486  649823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:34:13.717606  649823 api_server.go:72] duration metric: took 1.014573451s to wait for apiserver process to appear ...
	I1209 11:34:13.717642  649823 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:34:13.717671  649823 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I1209 11:34:13.718206  649823 api_server.go:269] stopped: https://192.168.39.125:8443/healthz: Get "https://192.168.39.125:8443/healthz": dial tcp 192.168.39.125:8443: connect: connection refused
	I1209 11:34:14.218555  649823 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I1209 11:34:17.931780  649823 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:34:17.931822  649823 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:34:17.931843  649823 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I1209 11:34:17.981559  649823 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:34:17.981596  649823 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:34:18.217858  649823 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I1209 11:34:18.223127  649823 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:34:18.223153  649823 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:34:18.718416  649823 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I1209 11:34:18.728504  649823 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:34:18.728598  649823 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:34:19.217852  649823 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I1209 11:34:19.223880  649823 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I1209 11:34:19.229913  649823 api_server.go:141] control plane version: v1.24.4
	I1209 11:34:19.229948  649823 api_server.go:131] duration metric: took 5.512297581s to wait for apiserver health ...
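Note: the health wait above is an unauthenticated GET against /healthz that tolerates connection-refused, 403 (anonymous user before the RBAC bootstrap roles exist) and 500 (poststarthooks still running) until it sees 200. A manual equivalent from the host, with -k skipping TLS verification to match the anonymous probe:

    until curl -ksf https://192.168.39.125:8443/healthz >/dev/null; do
      sleep 0.5
    done
    curl -ks https://192.168.39.125:8443/healthz   # prints "ok" once the control plane is up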
	I1209 11:34:19.229961  649823 cni.go:84] Creating CNI manager for ""
	I1209 11:34:19.229972  649823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:34:19.231955  649823 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:34:19.233329  649823 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:34:19.246061  649823 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
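Note: with the bridge conflist written (and the podman/bridge configs renamed to *.mk_disabled earlier), the kubelet can create pod sandboxes and the node should move to Ready. A quick hedged check, assuming the kubeconfig context carries the profile name:

    # on the node: 1-k8s.conflist plus the disabled configs from earlier
    ls /etc/cni/net.d/
    # from the host
    kubectl --context test-preload-934001 wait --for=condition=Ready node/test-preload-934001 --timeout=2m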
	I1209 11:34:19.269153  649823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:34:19.269275  649823 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 11:34:19.269313  649823 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 11:34:19.280201  649823 system_pods.go:59] 7 kube-system pods found
	I1209 11:34:19.280230  649823 system_pods.go:61] "coredns-6d4b75cb6d-5sm4n" [7f7a9af8-0040-4f29-b6e8-6e0df48ff0af] Running
	I1209 11:34:19.280238  649823 system_pods.go:61] "etcd-test-preload-934001" [3e828ef3-6107-4e36-9f6e-ec5dd9e9b04d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:34:19.280243  649823 system_pods.go:61] "kube-apiserver-test-preload-934001" [11e03cec-1cbe-45d7-aed1-7761a3e4bc9b] Running
	I1209 11:34:19.280248  649823 system_pods.go:61] "kube-controller-manager-test-preload-934001" [b89c9b73-dc56-4f75-ad05-6302f340b120] Running
	I1209 11:34:19.280256  649823 system_pods.go:61] "kube-proxy-hdwmv" [afc0156f-bb90-4a58-83a5-42342f7ca40d] Running
	I1209 11:34:19.280265  649823 system_pods.go:61] "kube-scheduler-test-preload-934001" [2311f3c7-3d4c-4006-989a-0fc0ee3cfef6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:34:19.280272  649823 system_pods.go:61] "storage-provisioner" [d0e31e88-e025-4042-bdba-634a3948f362] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:34:19.280280  649823 system_pods.go:74] duration metric: took 11.100545ms to wait for pod list to return data ...
	I1209 11:34:19.280291  649823 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:34:19.284368  649823 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:34:19.284449  649823 node_conditions.go:123] node cpu capacity is 2
	I1209 11:34:19.284474  649823 node_conditions.go:105] duration metric: took 4.176943ms to run NodePressure ...
	I1209 11:34:19.284517  649823 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:34:19.442895  649823 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:34:19.448767  649823 kubeadm.go:739] kubelet initialised
	I1209 11:34:19.448789  649823 kubeadm.go:740] duration metric: took 5.86476ms waiting for restarted kubelet to initialise ...
	I1209 11:34:19.448799  649823 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:34:19.454230  649823 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-5sm4n" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:19.458795  649823 pod_ready.go:98] node "test-preload-934001" hosting pod "coredns-6d4b75cb6d-5sm4n" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-934001" has status "Ready":"False"
	I1209 11:34:19.458816  649823 pod_ready.go:82] duration metric: took 4.561819ms for pod "coredns-6d4b75cb6d-5sm4n" in "kube-system" namespace to be "Ready" ...
	E1209 11:34:19.458824  649823 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-934001" hosting pod "coredns-6d4b75cb6d-5sm4n" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-934001" has status "Ready":"False"
	I1209 11:34:19.458831  649823 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-934001" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:19.463380  649823 pod_ready.go:98] node "test-preload-934001" hosting pod "etcd-test-preload-934001" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-934001" has status "Ready":"False"
	I1209 11:34:19.463399  649823 pod_ready.go:82] duration metric: took 4.561067ms for pod "etcd-test-preload-934001" in "kube-system" namespace to be "Ready" ...
	E1209 11:34:19.463407  649823 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-934001" hosting pod "etcd-test-preload-934001" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-934001" has status "Ready":"False"
	I1209 11:34:19.463413  649823 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-934001" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:19.469925  649823 pod_ready.go:98] node "test-preload-934001" hosting pod "kube-apiserver-test-preload-934001" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-934001" has status "Ready":"False"
	I1209 11:34:19.469945  649823 pod_ready.go:82] duration metric: took 6.525199ms for pod "kube-apiserver-test-preload-934001" in "kube-system" namespace to be "Ready" ...
	E1209 11:34:19.469953  649823 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-934001" hosting pod "kube-apiserver-test-preload-934001" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-934001" has status "Ready":"False"
	I1209 11:34:19.469963  649823 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-934001" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:19.672484  649823 pod_ready.go:98] node "test-preload-934001" hosting pod "kube-controller-manager-test-preload-934001" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-934001" has status "Ready":"False"
	I1209 11:34:19.672512  649823 pod_ready.go:82] duration metric: took 202.542082ms for pod "kube-controller-manager-test-preload-934001" in "kube-system" namespace to be "Ready" ...
	E1209 11:34:19.672523  649823 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-934001" hosting pod "kube-controller-manager-test-preload-934001" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-934001" has status "Ready":"False"
	I1209 11:34:19.672529  649823 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hdwmv" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:20.076348  649823 pod_ready.go:98] node "test-preload-934001" hosting pod "kube-proxy-hdwmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-934001" has status "Ready":"False"
	I1209 11:34:20.076386  649823 pod_ready.go:82] duration metric: took 403.84685ms for pod "kube-proxy-hdwmv" in "kube-system" namespace to be "Ready" ...
	E1209 11:34:20.076399  649823 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-934001" hosting pod "kube-proxy-hdwmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-934001" has status "Ready":"False"
	I1209 11:34:20.076410  649823 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-934001" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:20.473121  649823 pod_ready.go:98] node "test-preload-934001" hosting pod "kube-scheduler-test-preload-934001" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-934001" has status "Ready":"False"
	I1209 11:34:20.473156  649823 pod_ready.go:82] duration metric: took 396.737024ms for pod "kube-scheduler-test-preload-934001" in "kube-system" namespace to be "Ready" ...
	E1209 11:34:20.473170  649823 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-934001" hosting pod "kube-scheduler-test-preload-934001" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-934001" has status "Ready":"False"
	I1209 11:34:20.473180  649823 pod_ready.go:39] duration metric: took 1.024371051s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:34:20.473211  649823 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:34:20.488847  649823 ops.go:34] apiserver oom_adj: -16
	I1209 11:34:20.488875  649823 kubeadm.go:597] duration metric: took 9.577046614s to restartPrimaryControlPlane
	I1209 11:34:20.488886  649823 kubeadm.go:394] duration metric: took 9.623646022s to StartCluster
	I1209 11:34:20.488907  649823 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:34:20.488982  649823 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:34:20.489695  649823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:34:20.489967  649823 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:34:20.490020  649823 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:34:20.490139  649823 addons.go:69] Setting storage-provisioner=true in profile "test-preload-934001"
	I1209 11:34:20.490194  649823 addons.go:69] Setting default-storageclass=true in profile "test-preload-934001"
	I1209 11:34:20.490232  649823 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-934001"
	I1209 11:34:20.490233  649823 config.go:182] Loaded profile config "test-preload-934001": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1209 11:34:20.490202  649823 addons.go:234] Setting addon storage-provisioner=true in "test-preload-934001"
	W1209 11:34:20.490314  649823 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:34:20.490359  649823 host.go:66] Checking if "test-preload-934001" exists ...
	I1209 11:34:20.490693  649823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:34:20.490729  649823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:34:20.490697  649823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:34:20.490832  649823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:34:20.491488  649823 out.go:177] * Verifying Kubernetes components...
	I1209 11:34:20.492719  649823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:34:20.506256  649823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33011
	I1209 11:34:20.506293  649823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45329
	I1209 11:34:20.506766  649823 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:34:20.506822  649823 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:34:20.507306  649823 main.go:141] libmachine: Using API Version  1
	I1209 11:34:20.507345  649823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:34:20.507455  649823 main.go:141] libmachine: Using API Version  1
	I1209 11:34:20.507478  649823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:34:20.507732  649823 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:34:20.507821  649823 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:34:20.507929  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetState
	I1209 11:34:20.508428  649823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:34:20.508483  649823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:34:20.510380  649823 kapi.go:59] client config for test-preload-934001: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/test-preload-934001/client.crt", KeyFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/profiles/test-preload-934001/client.key", CAFile:"/home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 11:34:20.510759  649823 addons.go:234] Setting addon default-storageclass=true in "test-preload-934001"
	W1209 11:34:20.510784  649823 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:34:20.510824  649823 host.go:66] Checking if "test-preload-934001" exists ...
	I1209 11:34:20.511213  649823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:34:20.511272  649823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:34:20.524351  649823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35507
	I1209 11:34:20.524897  649823 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:34:20.525550  649823 main.go:141] libmachine: Using API Version  1
	I1209 11:34:20.525581  649823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:34:20.525990  649823 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:34:20.526044  649823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36061
	I1209 11:34:20.526228  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetState
	I1209 11:34:20.526412  649823 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:34:20.526880  649823 main.go:141] libmachine: Using API Version  1
	I1209 11:34:20.526907  649823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:34:20.527276  649823 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:34:20.527872  649823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:34:20.527921  649823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:34:20.528186  649823 main.go:141] libmachine: (test-preload-934001) Calling .DriverName
	I1209 11:34:20.529902  649823 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:34:20.531232  649823 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:34:20.531247  649823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:34:20.531263  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHHostname
	I1209 11:34:20.534500  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:34:20.534965  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:7d:c2", ip: ""} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:34:20.534999  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:34:20.535144  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHPort
	I1209 11:34:20.535311  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHKeyPath
	I1209 11:34:20.535426  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHUsername
	I1209 11:34:20.535541  649823 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/test-preload-934001/id_rsa Username:docker}
	I1209 11:34:20.562311  649823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33619
	I1209 11:34:20.562903  649823 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:34:20.563454  649823 main.go:141] libmachine: Using API Version  1
	I1209 11:34:20.563485  649823 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:34:20.563907  649823 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:34:20.564078  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetState
	I1209 11:34:20.565580  649823 main.go:141] libmachine: (test-preload-934001) Calling .DriverName
	I1209 11:34:20.565805  649823 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:34:20.565819  649823 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:34:20.565842  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHHostname
	I1209 11:34:20.568562  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:34:20.568942  649823 main.go:141] libmachine: (test-preload-934001) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:7d:c2", ip: ""} in network mk-test-preload-934001: {Iface:virbr1 ExpiryTime:2024-12-09 12:33:48 +0000 UTC Type:0 Mac:52:54:00:00:7d:c2 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:test-preload-934001 Clientid:01:52:54:00:00:7d:c2}
	I1209 11:34:20.568982  649823 main.go:141] libmachine: (test-preload-934001) DBG | domain test-preload-934001 has defined IP address 192.168.39.125 and MAC address 52:54:00:00:7d:c2 in network mk-test-preload-934001
	I1209 11:34:20.569117  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHPort
	I1209 11:34:20.569312  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHKeyPath
	I1209 11:34:20.569447  649823 main.go:141] libmachine: (test-preload-934001) Calling .GetSSHUsername
	I1209 11:34:20.569557  649823 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/test-preload-934001/id_rsa Username:docker}
	I1209 11:34:20.650812  649823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:34:20.674155  649823 node_ready.go:35] waiting up to 6m0s for node "test-preload-934001" to be "Ready" ...
	I1209 11:34:20.753569  649823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:34:20.795572  649823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:34:21.759615  649823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.005996787s)
	I1209 11:34:21.759680  649823 main.go:141] libmachine: Making call to close driver server
	I1209 11:34:21.759690  649823 main.go:141] libmachine: (test-preload-934001) Calling .Close
	I1209 11:34:21.759701  649823 main.go:141] libmachine: Making call to close driver server
	I1209 11:34:21.759721  649823 main.go:141] libmachine: (test-preload-934001) Calling .Close
	I1209 11:34:21.760023  649823 main.go:141] libmachine: (test-preload-934001) DBG | Closing plugin on server side
	I1209 11:34:21.760066  649823 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:34:21.760075  649823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:34:21.760085  649823 main.go:141] libmachine: Making call to close driver server
	I1209 11:34:21.760095  649823 main.go:141] libmachine: (test-preload-934001) DBG | Closing plugin on server side
	I1209 11:34:21.760110  649823 main.go:141] libmachine: (test-preload-934001) Calling .Close
	I1209 11:34:21.760137  649823 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:34:21.760150  649823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:34:21.760162  649823 main.go:141] libmachine: Making call to close driver server
	I1209 11:34:21.760170  649823 main.go:141] libmachine: (test-preload-934001) Calling .Close
	I1209 11:34:21.760408  649823 main.go:141] libmachine: (test-preload-934001) DBG | Closing plugin on server side
	I1209 11:34:21.760435  649823 main.go:141] libmachine: (test-preload-934001) DBG | Closing plugin on server side
	I1209 11:34:21.760437  649823 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:34:21.760451  649823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:34:21.760716  649823 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:34:21.760731  649823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:34:21.768885  649823 main.go:141] libmachine: Making call to close driver server
	I1209 11:34:21.768908  649823 main.go:141] libmachine: (test-preload-934001) Calling .Close
	I1209 11:34:21.769198  649823 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:34:21.769219  649823 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:34:21.769249  649823 main.go:141] libmachine: (test-preload-934001) DBG | Closing plugin on server side
	I1209 11:34:21.771173  649823 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1209 11:34:21.772258  649823 addons.go:510] duration metric: took 1.28225636s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1209 11:34:22.682069  649823 node_ready.go:53] node "test-preload-934001" has status "Ready":"False"
	I1209 11:34:25.179051  649823 node_ready.go:53] node "test-preload-934001" has status "Ready":"False"
	I1209 11:34:27.678160  649823 node_ready.go:53] node "test-preload-934001" has status "Ready":"False"
	I1209 11:34:28.181340  649823 node_ready.go:49] node "test-preload-934001" has status "Ready":"True"
	I1209 11:34:28.181375  649823 node_ready.go:38] duration metric: took 7.507137293s for node "test-preload-934001" to be "Ready" ...
	I1209 11:34:28.181388  649823 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:34:28.186382  649823 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-5sm4n" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:28.190695  649823 pod_ready.go:93] pod "coredns-6d4b75cb6d-5sm4n" in "kube-system" namespace has status "Ready":"True"
	I1209 11:34:28.190721  649823 pod_ready.go:82] duration metric: took 4.31316ms for pod "coredns-6d4b75cb6d-5sm4n" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:28.190732  649823 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-934001" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:30.199617  649823 pod_ready.go:103] pod "etcd-test-preload-934001" in "kube-system" namespace has status "Ready":"False"
	I1209 11:34:31.697950  649823 pod_ready.go:93] pod "etcd-test-preload-934001" in "kube-system" namespace has status "Ready":"True"
	I1209 11:34:31.697975  649823 pod_ready.go:82] duration metric: took 3.50723725s for pod "etcd-test-preload-934001" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:31.697985  649823 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-934001" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:31.704018  649823 pod_ready.go:93] pod "kube-apiserver-test-preload-934001" in "kube-system" namespace has status "Ready":"True"
	I1209 11:34:31.704036  649823 pod_ready.go:82] duration metric: took 6.044575ms for pod "kube-apiserver-test-preload-934001" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:31.704047  649823 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-934001" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:31.708907  649823 pod_ready.go:93] pod "kube-controller-manager-test-preload-934001" in "kube-system" namespace has status "Ready":"True"
	I1209 11:34:31.708924  649823 pod_ready.go:82] duration metric: took 4.871218ms for pod "kube-controller-manager-test-preload-934001" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:31.708932  649823 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hdwmv" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:31.712708  649823 pod_ready.go:93] pod "kube-proxy-hdwmv" in "kube-system" namespace has status "Ready":"True"
	I1209 11:34:31.712723  649823 pod_ready.go:82] duration metric: took 3.786572ms for pod "kube-proxy-hdwmv" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:31.712731  649823 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-934001" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:31.779060  649823 pod_ready.go:93] pod "kube-scheduler-test-preload-934001" in "kube-system" namespace has status "Ready":"True"
	I1209 11:34:31.779085  649823 pod_ready.go:82] duration metric: took 66.348064ms for pod "kube-scheduler-test-preload-934001" in "kube-system" namespace to be "Ready" ...
	I1209 11:34:31.779097  649823 pod_ready.go:39] duration metric: took 3.597699083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:34:31.779114  649823 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:34:31.779184  649823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:34:31.794796  649823 api_server.go:72] duration metric: took 11.304788285s to wait for apiserver process to appear ...
	I1209 11:34:31.794825  649823 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:34:31.794850  649823 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I1209 11:34:31.800207  649823 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I1209 11:34:31.801332  649823 api_server.go:141] control plane version: v1.24.4
	I1209 11:34:31.801355  649823 api_server.go:131] duration metric: took 6.523041ms to wait for apiserver health ...
	I1209 11:34:31.801363  649823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:34:31.980804  649823 system_pods.go:59] 7 kube-system pods found
	I1209 11:34:31.980843  649823 system_pods.go:61] "coredns-6d4b75cb6d-5sm4n" [7f7a9af8-0040-4f29-b6e8-6e0df48ff0af] Running
	I1209 11:34:31.980850  649823 system_pods.go:61] "etcd-test-preload-934001" [3e828ef3-6107-4e36-9f6e-ec5dd9e9b04d] Running
	I1209 11:34:31.980855  649823 system_pods.go:61] "kube-apiserver-test-preload-934001" [11e03cec-1cbe-45d7-aed1-7761a3e4bc9b] Running
	I1209 11:34:31.980860  649823 system_pods.go:61] "kube-controller-manager-test-preload-934001" [b89c9b73-dc56-4f75-ad05-6302f340b120] Running
	I1209 11:34:31.980866  649823 system_pods.go:61] "kube-proxy-hdwmv" [afc0156f-bb90-4a58-83a5-42342f7ca40d] Running
	I1209 11:34:31.980871  649823 system_pods.go:61] "kube-scheduler-test-preload-934001" [2311f3c7-3d4c-4006-989a-0fc0ee3cfef6] Running
	I1209 11:34:31.980877  649823 system_pods.go:61] "storage-provisioner" [d0e31e88-e025-4042-bdba-634a3948f362] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:34:31.980887  649823 system_pods.go:74] duration metric: took 179.516072ms to wait for pod list to return data ...
	I1209 11:34:31.980899  649823 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:34:32.179045  649823 default_sa.go:45] found service account: "default"
	I1209 11:34:32.179075  649823 default_sa.go:55] duration metric: took 198.167609ms for default service account to be created ...
	I1209 11:34:32.179088  649823 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:34:32.382640  649823 system_pods.go:86] 7 kube-system pods found
	I1209 11:34:32.382671  649823 system_pods.go:89] "coredns-6d4b75cb6d-5sm4n" [7f7a9af8-0040-4f29-b6e8-6e0df48ff0af] Running
	I1209 11:34:32.382676  649823 system_pods.go:89] "etcd-test-preload-934001" [3e828ef3-6107-4e36-9f6e-ec5dd9e9b04d] Running
	I1209 11:34:32.382680  649823 system_pods.go:89] "kube-apiserver-test-preload-934001" [11e03cec-1cbe-45d7-aed1-7761a3e4bc9b] Running
	I1209 11:34:32.382684  649823 system_pods.go:89] "kube-controller-manager-test-preload-934001" [b89c9b73-dc56-4f75-ad05-6302f340b120] Running
	I1209 11:34:32.382687  649823 system_pods.go:89] "kube-proxy-hdwmv" [afc0156f-bb90-4a58-83a5-42342f7ca40d] Running
	I1209 11:34:32.382690  649823 system_pods.go:89] "kube-scheduler-test-preload-934001" [2311f3c7-3d4c-4006-989a-0fc0ee3cfef6] Running
	I1209 11:34:32.382696  649823 system_pods.go:89] "storage-provisioner" [d0e31e88-e025-4042-bdba-634a3948f362] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:34:32.382703  649823 system_pods.go:126] duration metric: took 203.608541ms to wait for k8s-apps to be running ...
	I1209 11:34:32.382712  649823 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:34:32.382765  649823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:34:32.396545  649823 system_svc.go:56] duration metric: took 13.819778ms WaitForService to wait for kubelet
	I1209 11:34:32.396573  649823 kubeadm.go:582] duration metric: took 11.906575041s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:34:32.396594  649823 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:34:32.578829  649823 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:34:32.578855  649823 node_conditions.go:123] node cpu capacity is 2
	I1209 11:34:32.578865  649823 node_conditions.go:105] duration metric: took 182.266339ms to run NodePressure ...
	I1209 11:34:32.578877  649823 start.go:241] waiting for startup goroutines ...
	I1209 11:34:32.578884  649823 start.go:246] waiting for cluster config update ...
	I1209 11:34:32.578894  649823 start.go:255] writing updated cluster config ...
	I1209 11:34:32.579140  649823 ssh_runner.go:195] Run: rm -f paused
	I1209 11:34:32.627857  649823 start.go:600] kubectl: 1.31.3, cluster: 1.24.4 (minor skew: 7)
	I1209 11:34:32.629657  649823 out.go:201] 
	W1209 11:34:32.630865  649823 out.go:270] ! /usr/local/bin/kubectl is version 1.31.3, which may have incompatibilities with Kubernetes 1.24.4.
	I1209 11:34:32.631959  649823 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1209 11:34:32.633206  649823 out.go:177] * Done! kubectl is now configured to use "test-preload-934001" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.540504916Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744073540479671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f94f15dc-c093-4de5-b7d5-23826ccfc2c0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.541123131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40f80ba9-47cf-41a0-a37c-bc155f4bbc5c name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.541220065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40f80ba9-47cf-41a0-a37c-bc155f4bbc5c name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.541388219Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf20623427f0339a8b74726d7b8e04959c1a9b9a47d72460989cca67890d57a9,PodSandboxId:dd558a9126b676bd9ec8aba3622b275a0543871b30c150e8bff465b1627c6eb4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733744072751996101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e31e88-e025-4042-bdba-634a3948f362,},Annotations:map[string]string{io.kubernetes.container.hash: 7fd5c86b,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b85f7e5234987b12c9ad465965d5eb60cbf96b0de46421c444d26732f3dd37,PodSandboxId:1a6ed46de72ad9b2aa1d200dcb036bb46b1d10e1a3e85f04b79aff0d0293b233,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733744067146368767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-5sm4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f7a9af8-0040-4f29-b6e8-6e0df48ff0af,},Annotations:map[string]string{io.kubernetes.container.hash: 3dbdaa4e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aceb9a4bc2f5185e04a221b0611c06bfe1483956cc176da0c17510b87d87e77c,PodSandboxId:5ba56f2e18cb85b3f8fa046d2d87722a9852300a43c13ef088e88cbfba774407,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733744059939665458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hdwmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af
c0156f-bb90-4a58-83a5-42342f7ca40d,},Annotations:map[string]string{io.kubernetes.container.hash: 6eee9cfa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8274c3d670143692f8c4b2388ffa7d48d176a4326ad29285c14eea8c5649744f,PodSandboxId:dd558a9126b676bd9ec8aba3622b275a0543871b30c150e8bff465b1627c6eb4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733744059780876147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e31e88-e025-4
042-bdba-634a3948f362,},Annotations:map[string]string{io.kubernetes.container.hash: 7fd5c86b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00d0a5d8c47956e3206152fd03d749fb5987a30e852a230a5ea8bd28d58bdf6,PodSandboxId:67c2830903534ebe61e928bd32cddcfb86d9fdafc2ebd6b9349624b4d17401ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733744053404488966,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-934001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ae6b99e3b87a1906ee535
278bc6b97f,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09be793d9fbbf3bcadd38a7268f54d4646199c61ff57071c9fd2a821cb5347c,PodSandboxId:78e594e16ea0a6a316794e87668e05efb62103f57618737f49b55496b788c50d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733744053398947795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-934001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a31c7869817235e4d30ec3aa16ecb0b7,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 7d6e9e7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cedf0d13c1d15a11633400faf8b15950542aa6dc1fb5c676ce919b599d1cc6f3,PodSandboxId:f4ba204c4fae1eb1f09e1884cc4e5ff118ff3f9afc5fd0f070fb3e9896737bc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733744053333090524,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-934001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f90815b54fd7362ac0a60487460b279,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb08bc4d7da6b845b32b247a3bf052a09409c690301354480ee3ac634b0e2d0e,PodSandboxId:99bd3220bccbf3094d1a48e88165431024c33ec13c7bbb2d32d3bcc733209a79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733744053297356489,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-934001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 931256e0908765a4fdb679bc5bd1c236,},Annotations:map[string]
string{io.kubernetes.container.hash: 151892b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40f80ba9-47cf-41a0-a37c-bc155f4bbc5c name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.577261288Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74a4aa42-a3b2-4c8c-bf86-b8f5c2a4732a name=/runtime.v1.RuntimeService/Version
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.577330825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74a4aa42-a3b2-4c8c-bf86-b8f5c2a4732a name=/runtime.v1.RuntimeService/Version
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.578877906Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b9cb34a3-f2ad-46d2-b024-d149a958795c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.579429225Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744073579403915,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9cb34a3-f2ad-46d2-b024-d149a958795c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.580182403Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf4bc63d-005a-441f-b08f-3d49a947f056 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.580249786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf4bc63d-005a-441f-b08f-3d49a947f056 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.580444337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf20623427f0339a8b74726d7b8e04959c1a9b9a47d72460989cca67890d57a9,PodSandboxId:dd558a9126b676bd9ec8aba3622b275a0543871b30c150e8bff465b1627c6eb4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733744072751996101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e31e88-e025-4042-bdba-634a3948f362,},Annotations:map[string]string{io.kubernetes.container.hash: 7fd5c86b,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b85f7e5234987b12c9ad465965d5eb60cbf96b0de46421c444d26732f3dd37,PodSandboxId:1a6ed46de72ad9b2aa1d200dcb036bb46b1d10e1a3e85f04b79aff0d0293b233,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733744067146368767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-5sm4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f7a9af8-0040-4f29-b6e8-6e0df48ff0af,},Annotations:map[string]string{io.kubernetes.container.hash: 3dbdaa4e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aceb9a4bc2f5185e04a221b0611c06bfe1483956cc176da0c17510b87d87e77c,PodSandboxId:5ba56f2e18cb85b3f8fa046d2d87722a9852300a43c13ef088e88cbfba774407,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733744059939665458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hdwmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af
c0156f-bb90-4a58-83a5-42342f7ca40d,},Annotations:map[string]string{io.kubernetes.container.hash: 6eee9cfa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8274c3d670143692f8c4b2388ffa7d48d176a4326ad29285c14eea8c5649744f,PodSandboxId:dd558a9126b676bd9ec8aba3622b275a0543871b30c150e8bff465b1627c6eb4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733744059780876147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e31e88-e025-4
042-bdba-634a3948f362,},Annotations:map[string]string{io.kubernetes.container.hash: 7fd5c86b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00d0a5d8c47956e3206152fd03d749fb5987a30e852a230a5ea8bd28d58bdf6,PodSandboxId:67c2830903534ebe61e928bd32cddcfb86d9fdafc2ebd6b9349624b4d17401ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733744053404488966,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-934001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ae6b99e3b87a1906ee535
278bc6b97f,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09be793d9fbbf3bcadd38a7268f54d4646199c61ff57071c9fd2a821cb5347c,PodSandboxId:78e594e16ea0a6a316794e87668e05efb62103f57618737f49b55496b788c50d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733744053398947795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-934001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a31c7869817235e4d30ec3aa16ecb0b7,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 7d6e9e7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cedf0d13c1d15a11633400faf8b15950542aa6dc1fb5c676ce919b599d1cc6f3,PodSandboxId:f4ba204c4fae1eb1f09e1884cc4e5ff118ff3f9afc5fd0f070fb3e9896737bc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733744053333090524,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-934001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f90815b54fd7362ac0a60487460b279,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb08bc4d7da6b845b32b247a3bf052a09409c690301354480ee3ac634b0e2d0e,PodSandboxId:99bd3220bccbf3094d1a48e88165431024c33ec13c7bbb2d32d3bcc733209a79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733744053297356489,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-934001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 931256e0908765a4fdb679bc5bd1c236,},Annotations:map[string]
string{io.kubernetes.container.hash: 151892b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf4bc63d-005a-441f-b08f-3d49a947f056 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.614026249Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1d974f7-6823-4bc4-b1c1-16b7448cc152 name=/runtime.v1.RuntimeService/Version
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.614115336Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1d974f7-6823-4bc4-b1c1-16b7448cc152 name=/runtime.v1.RuntimeService/Version
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.615074797Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=18613789-c431-4c29-983d-efddf5c9c4fc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.615702710Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744073615677218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18613789-c431-4c29-983d-efddf5c9c4fc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.616139642Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff768f7a-07dc-41f3-b193-7dd9796a2383 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.616251453Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff768f7a-07dc-41f3-b193-7dd9796a2383 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.616460707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf20623427f0339a8b74726d7b8e04959c1a9b9a47d72460989cca67890d57a9,PodSandboxId:dd558a9126b676bd9ec8aba3622b275a0543871b30c150e8bff465b1627c6eb4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733744072751996101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e31e88-e025-4042-bdba-634a3948f362,},Annotations:map[string]string{io.kubernetes.container.hash: 7fd5c86b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b85f7e5234987b12c9ad465965d5eb60cbf96b0de46421c444d26732f3dd37,PodSandboxId:1a6ed46de72ad9b2aa1d200dcb036bb46b1d10e1a3e85f04b79aff0d0293b233,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733744067146368767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-5sm4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f7a9af8-0040-4f29-b6e8-6e0df48ff0af,},Annotations:map[string]string{io.kubernetes.container.hash: 3dbdaa4e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aceb9a4bc2f5185e04a221b0611c06bfe1483956cc176da0c17510b87d87e77c,PodSandboxId:5ba56f2e18cb85b3f8fa046d2d87722a9852300a43c13ef088e88cbfba774407,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733744059939665458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hdwmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afc0156f-bb90-4a58-83a5-42342f7ca40d,},Annotations:map[string]string{io.kubernetes.container.hash: 6eee9cfa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8274c3d670143692f8c4b2388ffa7d48d176a4326ad29285c14eea8c5649744f,PodSandboxId:dd558a9126b676bd9ec8aba3622b275a0543871b30c150e8bff465b1627c6eb4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733744059780876147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e31e88-e025-4042-bdba-634a3948f362,},Annotations:map[string]string{io.kubernetes.container.hash: 7fd5c86b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00d0a5d8c47956e3206152fd03d749fb5987a30e852a230a5ea8bd28d58bdf6,PodSandboxId:67c2830903534ebe61e928bd32cddcfb86d9fdafc2ebd6b9349624b4d17401ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733744053404488966,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-934001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ae6b99e3b87a1906ee535278bc6b97f,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09be793d9fbbf3bcadd38a7268f54d4646199c61ff57071c9fd2a821cb5347c,PodSandboxId:78e594e16ea0a6a316794e87668e05efb62103f57618737f49b55496b788c50d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733744053398947795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-934001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a31c7869817235e4d30ec3aa16ecb0b7,},Annotations:map[string]string{io.kubernetes.container.hash: 7d6e9e7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cedf0d13c1d15a11633400faf8b15950542aa6dc1fb5c676ce919b599d1cc6f3,PodSandboxId:f4ba204c4fae1eb1f09e1884cc4e5ff118ff3f9afc5fd0f070fb3e9896737bc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733744053333090524,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-934001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f90815b54fd7362ac0a60487460b279,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb08bc4d7da6b845b32b247a3bf052a09409c690301354480ee3ac634b0e2d0e,PodSandboxId:99bd3220bccbf3094d1a48e88165431024c33ec13c7bbb2d32d3bcc733209a79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733744053297356489,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-934001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 931256e0908765a4fdb679bc5bd1c236,},Annotations:map[string]string{io.kubernetes.container.hash: 151892b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff768f7a-07dc-41f3-b193-7dd9796a2383 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.646074449Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c4d9cd6-1de4-4d8f-9630-72463dd2937f name=/runtime.v1.RuntimeService/Version
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.646145380Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c4d9cd6-1de4-4d8f-9630-72463dd2937f name=/runtime.v1.RuntimeService/Version
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.647706042Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=345a8b27-5d33-4f03-8276-fa7c8c5ea663 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.648232567Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744073648203037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=345a8b27-5d33-4f03-8276-fa7c8c5ea663 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.648851161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc6fbd37-3843-4934-962f-138fbb8d7112 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.648943179Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc6fbd37-3843-4934-962f-138fbb8d7112 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:34:33 test-preload-934001 crio[693]: time="2024-12-09 11:34:33.649105036Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf20623427f0339a8b74726d7b8e04959c1a9b9a47d72460989cca67890d57a9,PodSandboxId:dd558a9126b676bd9ec8aba3622b275a0543871b30c150e8bff465b1627c6eb4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733744072751996101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e31e88-e025-4042-bdba-634a3948f362,},Annotations:map[string]string{io.kubernetes.container.hash: 7fd5c86b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b85f7e5234987b12c9ad465965d5eb60cbf96b0de46421c444d26732f3dd37,PodSandboxId:1a6ed46de72ad9b2aa1d200dcb036bb46b1d10e1a3e85f04b79aff0d0293b233,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733744067146368767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-5sm4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f7a9af8-0040-4f29-b6e8-6e0df48ff0af,},Annotations:map[string]string{io.kubernetes.container.hash: 3dbdaa4e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aceb9a4bc2f5185e04a221b0611c06bfe1483956cc176da0c17510b87d87e77c,PodSandboxId:5ba56f2e18cb85b3f8fa046d2d87722a9852300a43c13ef088e88cbfba774407,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733744059939665458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hdwmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afc0156f-bb90-4a58-83a5-42342f7ca40d,},Annotations:map[string]string{io.kubernetes.container.hash: 6eee9cfa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8274c3d670143692f8c4b2388ffa7d48d176a4326ad29285c14eea8c5649744f,PodSandboxId:dd558a9126b676bd9ec8aba3622b275a0543871b30c150e8bff465b1627c6eb4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733744059780876147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e31e88-e025-4042-bdba-634a3948f362,},Annotations:map[string]string{io.kubernetes.container.hash: 7fd5c86b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d00d0a5d8c47956e3206152fd03d749fb5987a30e852a230a5ea8bd28d58bdf6,PodSandboxId:67c2830903534ebe61e928bd32cddcfb86d9fdafc2ebd6b9349624b4d17401ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733744053404488966,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-934001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ae6b99e3b87a1906ee535278bc6b97f,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09be793d9fbbf3bcadd38a7268f54d4646199c61ff57071c9fd2a821cb5347c,PodSandboxId:78e594e16ea0a6a316794e87668e05efb62103f57618737f49b55496b788c50d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733744053398947795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-934001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a31c7869817235e4d30ec3aa16ecb0b7,},Annotations:map[string]string{io.kubernetes.container.hash: 7d6e9e7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cedf0d13c1d15a11633400faf8b15950542aa6dc1fb5c676ce919b599d1cc6f3,PodSandboxId:f4ba204c4fae1eb1f09e1884cc4e5ff118ff3f9afc5fd0f070fb3e9896737bc1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733744053333090524,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-934001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f90815b54fd7362ac0a60487460b279,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb08bc4d7da6b845b32b247a3bf052a09409c690301354480ee3ac634b0e2d0e,PodSandboxId:99bd3220bccbf3094d1a48e88165431024c33ec13c7bbb2d32d3bcc733209a79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733744053297356489,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-934001,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 931256e0908765a4fdb679bc5bd1c236,},Annotations:map[string]string{io.kubernetes.container.hash: 151892b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc6fbd37-3843-4934-962f-138fbb8d7112 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	cf20623427f03       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   Less than a second ago   Running             storage-provisioner       3                   dd558a9126b67       storage-provisioner
	86b85f7e52349       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago            Running             coredns                   1                   1a6ed46de72ad       coredns-6d4b75cb6d-5sm4n
	aceb9a4bc2f51       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago           Running             kube-proxy                1                   5ba56f2e18cb8       kube-proxy-hdwmv
	8274c3d670143       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago           Exited              storage-provisioner       2                   dd558a9126b67       storage-provisioner
	d00d0a5d8c479       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago           Running             kube-scheduler            1                   67c2830903534       kube-scheduler-test-preload-934001
	a09be793d9fbb       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago           Running             etcd                      1                   78e594e16ea0a       etcd-test-preload-934001
	cedf0d13c1d15       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago           Running             kube-controller-manager   1                   f4ba204c4fae1       kube-controller-manager-test-preload-934001
	eb08bc4d7da6b       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago           Running             kube-apiserver            1                   99bd3220bccbf       kube-apiserver-test-preload-934001
	
	
	==> coredns [86b85f7e5234987b12c9ad465965d5eb60cbf96b0de46421c444d26732f3dd37] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:37071 - 39462 "HINFO IN 5183907702389781962.6721201764914313832. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015483731s
	
	
	==> describe nodes <==
	Name:               test-preload-934001
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-934001
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=test-preload-934001
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T11_32_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 11:32:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-934001
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 11:34:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 11:34:28 +0000   Mon, 09 Dec 2024 11:32:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 11:34:28 +0000   Mon, 09 Dec 2024 11:32:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 11:34:28 +0000   Mon, 09 Dec 2024 11:32:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 11:34:28 +0000   Mon, 09 Dec 2024 11:34:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    test-preload-934001
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b316eeef29c4405eb626143e75295ae6
	  System UUID:                b316eeef-29c4-405e-b626-143e75295ae6
	  Boot ID:                    1b894b19-1e18-4658-a52b-3704be732b9c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-5sm4n                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     84s
	  kube-system                 etcd-test-preload-934001                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         97s
	  kube-system                 kube-apiserver-test-preload-934001             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-test-preload-934001    200m (10%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-proxy-hdwmv                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-test-preload-934001             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 82s                kube-proxy       
	  Normal  Starting                 98s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  98s                kubelet          Node test-preload-934001 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s                kubelet          Node test-preload-934001 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s                kubelet          Node test-preload-934001 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  97s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                87s                kubelet          Node test-preload-934001 status is now: NodeReady
	  Normal  RegisteredNode           84s                node-controller  Node test-preload-934001 event: Registered Node test-preload-934001 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-934001 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-934001 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-934001 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-934001 event: Registered Node test-preload-934001 in Controller
	
	
	==> dmesg <==
	[Dec 9 11:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052439] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037537] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.815183] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.022594] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.561605] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.355454] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.060755] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056140] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.199150] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.110991] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.262935] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[Dec 9 11:34] systemd-fstab-generator[1014]: Ignoring "noauto" option for root device
	[  +0.062535] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.093468] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +6.161172] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.917520] systemd-fstab-generator[1842]: Ignoring "noauto" option for root device
	[  +6.427076] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.835260] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [a09be793d9fbbf3bcadd38a7268f54d4646199c61ff57071c9fd2a821cb5347c] <==
	{"level":"info","ts":"2024-12-09T11:34:13.715Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"f4d3edba9e42b28c","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-12-09T11:34:13.717Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-09T11:34:13.720Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f4d3edba9e42b28c","initial-advertise-peer-urls":["https://192.168.39.125:2380"],"listen-peer-urls":["https://192.168.39.125:2380"],"advertise-client-urls":["https://192.168.39.125:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.125:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-09T11:34:13.720Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-09T11:34:13.720Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-12-09T11:34:13.721Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2024-12-09T11:34:13.721Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2024-12-09T11:34:13.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c switched to configuration voters=(17641705551115235980)"}
	{"level":"info","ts":"2024-12-09T11:34:13.721Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9838e9e2cfdaeabf","local-member-id":"f4d3edba9e42b28c","added-peer-id":"f4d3edba9e42b28c","added-peer-peer-urls":["https://192.168.39.125:2380"]}
	{"level":"info","ts":"2024-12-09T11:34:13.721Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9838e9e2cfdaeabf","local-member-id":"f4d3edba9e42b28c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:34:13.721Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:34:15.489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-09T11:34:15.489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-09T11:34:15.489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgPreVoteResp from f4d3edba9e42b28c at term 2"}
	{"level":"info","ts":"2024-12-09T11:34:15.489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became candidate at term 3"}
	{"level":"info","ts":"2024-12-09T11:34:15.489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgVoteResp from f4d3edba9e42b28c at term 3"}
	{"level":"info","ts":"2024-12-09T11:34:15.489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became leader at term 3"}
	{"level":"info","ts":"2024-12-09T11:34:15.489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4d3edba9e42b28c elected leader f4d3edba9e42b28c at term 3"}
	{"level":"info","ts":"2024-12-09T11:34:15.489Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f4d3edba9e42b28c","local-member-attributes":"{Name:test-preload-934001 ClientURLs:[https://192.168.39.125:2379]}","request-path":"/0/members/f4d3edba9e42b28c/attributes","cluster-id":"9838e9e2cfdaeabf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-09T11:34:15.490Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T11:34:15.492Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-09T11:34:15.492Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T11:34:15.493Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.125:2379"}
	{"level":"info","ts":"2024-12-09T11:34:15.494Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-09T11:34:15.494Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:34:33 up 0 min,  0 users,  load average: 0.81, 0.23, 0.08
	Linux test-preload-934001 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [eb08bc4d7da6b845b32b247a3bf052a09409c690301354480ee3ac634b0e2d0e] <==
	I1209 11:34:17.871720       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1209 11:34:17.873025       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1209 11:34:17.873065       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1209 11:34:17.873087       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1209 11:34:17.889495       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1209 11:34:17.899708       1 establishing_controller.go:76] Starting EstablishingController
	I1209 11:34:17.956587       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 11:34:17.959080       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E1209 11:34:17.970247       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1209 11:34:17.989977       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1209 11:34:18.029129       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1209 11:34:18.030994       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1209 11:34:18.031420       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1209 11:34:18.041479       1 cache.go:39] Caches are synced for autoregister controller
	I1209 11:34:18.041852       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1209 11:34:18.525738       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1209 11:34:18.834085       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 11:34:19.330359       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1209 11:34:19.344360       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1209 11:34:19.374355       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1209 11:34:19.387683       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 11:34:19.393739       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 11:34:20.218498       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1209 11:34:30.293056       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 11:34:30.341505       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [cedf0d13c1d15a11633400faf8b15950542aa6dc1fb5c676ce919b599d1cc6f3] <==
	I1209 11:34:30.302485       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1209 11:34:30.304555       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1209 11:34:30.305597       1 shared_informer.go:262] Caches are synced for disruption
	I1209 11:34:30.305857       1 disruption.go:371] Sending events to api server.
	I1209 11:34:30.325063       1 shared_informer.go:262] Caches are synced for endpoint
	I1209 11:34:30.327305       1 shared_informer.go:262] Caches are synced for service account
	I1209 11:34:30.344288       1 shared_informer.go:262] Caches are synced for node
	I1209 11:34:30.345304       1 range_allocator.go:173] Starting range CIDR allocator
	I1209 11:34:30.345358       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1209 11:34:30.345389       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1209 11:34:30.346882       1 shared_informer.go:262] Caches are synced for crt configmap
	I1209 11:34:30.348739       1 shared_informer.go:262] Caches are synced for ephemeral
	I1209 11:34:30.351040       1 shared_informer.go:262] Caches are synced for HPA
	I1209 11:34:30.372557       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1209 11:34:30.422370       1 shared_informer.go:262] Caches are synced for PV protection
	I1209 11:34:30.476699       1 shared_informer.go:262] Caches are synced for attach detach
	I1209 11:34:30.478583       1 shared_informer.go:262] Caches are synced for expand
	I1209 11:34:30.487840       1 shared_informer.go:262] Caches are synced for persistent volume
	I1209 11:34:30.502357       1 shared_informer.go:262] Caches are synced for resource quota
	I1209 11:34:30.531645       1 shared_informer.go:262] Caches are synced for resource quota
	I1209 11:34:30.550230       1 shared_informer.go:262] Caches are synced for daemon sets
	I1209 11:34:30.552657       1 shared_informer.go:262] Caches are synced for stateful set
	I1209 11:34:30.933217       1 shared_informer.go:262] Caches are synced for garbage collector
	I1209 11:34:30.933340       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1209 11:34:30.967485       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [aceb9a4bc2f5185e04a221b0611c06bfe1483956cc176da0c17510b87d87e77c] <==
	I1209 11:34:20.173548       1 node.go:163] Successfully retrieved node IP: 192.168.39.125
	I1209 11:34:20.173624       1 server_others.go:138] "Detected node IP" address="192.168.39.125"
	I1209 11:34:20.173651       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1209 11:34:20.212130       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1209 11:34:20.212216       1 server_others.go:206] "Using iptables Proxier"
	I1209 11:34:20.212799       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1209 11:34:20.213337       1 server.go:661] "Version info" version="v1.24.4"
	I1209 11:34:20.213362       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 11:34:20.214866       1 config.go:317] "Starting service config controller"
	I1209 11:34:20.214907       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1209 11:34:20.214949       1 config.go:226] "Starting endpoint slice config controller"
	I1209 11:34:20.214966       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1209 11:34:20.216869       1 config.go:444] "Starting node config controller"
	I1209 11:34:20.216894       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1209 11:34:20.315729       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1209 11:34:20.315730       1 shared_informer.go:262] Caches are synced for service config
	I1209 11:34:20.317568       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [d00d0a5d8c47956e3206152fd03d749fb5987a30e852a230a5ea8bd28d58bdf6] <==
	I1209 11:34:14.266543       1 serving.go:348] Generated self-signed cert in-memory
	W1209 11:34:17.911209       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 11:34:17.911287       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 11:34:17.911309       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 11:34:17.911320       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 11:34:17.968778       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1209 11:34:17.968927       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 11:34:17.974489       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1209 11:34:17.975187       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 11:34:17.975247       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 11:34:17.975288       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1209 11:34:18.075475       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 11:34:19 test-preload-934001 kubelet[1151]: I1209 11:34:19.129039    1151 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52d9d42c-f62b-4195-ab7d-15b043073993-config-volume\") pod \"52d9d42c-f62b-4195-ab7d-15b043073993\" (UID: \"52d9d42c-f62b-4195-ab7d-15b043073993\") "
	Dec 09 11:34:19 test-preload-934001 kubelet[1151]: I1209 11:34:19.129243    1151 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xbhx\" (UniqueName: \"kubernetes.io/projected/52d9d42c-f62b-4195-ab7d-15b043073993-kube-api-access-4xbhx\") pod \"52d9d42c-f62b-4195-ab7d-15b043073993\" (UID: \"52d9d42c-f62b-4195-ab7d-15b043073993\") "
	Dec 09 11:34:19 test-preload-934001 kubelet[1151]: E1209 11:34:19.130261    1151 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 09 11:34:19 test-preload-934001 kubelet[1151]: E1209 11:34:19.130480    1151 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7f7a9af8-0040-4f29-b6e8-6e0df48ff0af-config-volume podName:7f7a9af8-0040-4f29-b6e8-6e0df48ff0af nodeName:}" failed. No retries permitted until 2024-12-09 11:34:19.63044286 +0000 UTC m=+7.123747740 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7f7a9af8-0040-4f29-b6e8-6e0df48ff0af-config-volume") pod "coredns-6d4b75cb6d-5sm4n" (UID: "7f7a9af8-0040-4f29-b6e8-6e0df48ff0af") : object "kube-system"/"coredns" not registered
	Dec 09 11:34:19 test-preload-934001 kubelet[1151]: W1209 11:34:19.131454    1151 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/52d9d42c-f62b-4195-ab7d-15b043073993/volumes/kubernetes.io~projected/kube-api-access-4xbhx: clearQuota called, but quotas disabled
	Dec 09 11:34:19 test-preload-934001 kubelet[1151]: I1209 11:34:19.131618    1151 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d9d42c-f62b-4195-ab7d-15b043073993-kube-api-access-4xbhx" (OuterVolumeSpecName: "kube-api-access-4xbhx") pod "52d9d42c-f62b-4195-ab7d-15b043073993" (UID: "52d9d42c-f62b-4195-ab7d-15b043073993"). InnerVolumeSpecName "kube-api-access-4xbhx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 09 11:34:19 test-preload-934001 kubelet[1151]: W1209 11:34:19.131780    1151 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/52d9d42c-f62b-4195-ab7d-15b043073993/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Dec 09 11:34:19 test-preload-934001 kubelet[1151]: I1209 11:34:19.132384    1151 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d9d42c-f62b-4195-ab7d-15b043073993-config-volume" (OuterVolumeSpecName: "config-volume") pod "52d9d42c-f62b-4195-ab7d-15b043073993" (UID: "52d9d42c-f62b-4195-ab7d-15b043073993"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Dec 09 11:34:19 test-preload-934001 kubelet[1151]: I1209 11:34:19.229972    1151 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52d9d42c-f62b-4195-ab7d-15b043073993-config-volume\") on node \"test-preload-934001\" DevicePath \"\""
	Dec 09 11:34:19 test-preload-934001 kubelet[1151]: I1209 11:34:19.230012    1151 reconciler.go:384] "Volume detached for volume \"kube-api-access-4xbhx\" (UniqueName: \"kubernetes.io/projected/52d9d42c-f62b-4195-ab7d-15b043073993-kube-api-access-4xbhx\") on node \"test-preload-934001\" DevicePath \"\""
	Dec 09 11:34:19 test-preload-934001 kubelet[1151]: E1209 11:34:19.637519    1151 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 09 11:34:19 test-preload-934001 kubelet[1151]: E1209 11:34:19.637570    1151 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7f7a9af8-0040-4f29-b6e8-6e0df48ff0af-config-volume podName:7f7a9af8-0040-4f29-b6e8-6e0df48ff0af nodeName:}" failed. No retries permitted until 2024-12-09 11:34:20.637558147 +0000 UTC m=+8.130863026 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7f7a9af8-0040-4f29-b6e8-6e0df48ff0af-config-volume") pod "coredns-6d4b75cb6d-5sm4n" (UID: "7f7a9af8-0040-4f29-b6e8-6e0df48ff0af") : object "kube-system"/"coredns" not registered
	Dec 09 11:34:19 test-preload-934001 kubelet[1151]: I1209 11:34:19.773519    1151 scope.go:110] "RemoveContainer" containerID="87ba584372e3db12541d4b5bf80c71eca7e67f739aedf3dfd8a14b84cbb27f02"
	Dec 09 11:34:20 test-preload-934001 kubelet[1151]: E1209 11:34:20.645762    1151 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 09 11:34:20 test-preload-934001 kubelet[1151]: E1209 11:34:20.645836    1151 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7f7a9af8-0040-4f29-b6e8-6e0df48ff0af-config-volume podName:7f7a9af8-0040-4f29-b6e8-6e0df48ff0af nodeName:}" failed. No retries permitted until 2024-12-09 11:34:22.645820249 +0000 UTC m=+10.139125129 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7f7a9af8-0040-4f29-b6e8-6e0df48ff0af-config-volume") pod "coredns-6d4b75cb6d-5sm4n" (UID: "7f7a9af8-0040-4f29-b6e8-6e0df48ff0af") : object "kube-system"/"coredns" not registered
	Dec 09 11:34:20 test-preload-934001 kubelet[1151]: E1209 11:34:20.729462    1151 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-5sm4n" podUID=7f7a9af8-0040-4f29-b6e8-6e0df48ff0af
	Dec 09 11:34:20 test-preload-934001 kubelet[1151]: I1209 11:34:20.745659    1151 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=52d9d42c-f62b-4195-ab7d-15b043073993 path="/var/lib/kubelet/pods/52d9d42c-f62b-4195-ab7d-15b043073993/volumes"
	Dec 09 11:34:20 test-preload-934001 kubelet[1151]: I1209 11:34:20.779680    1151 scope.go:110] "RemoveContainer" containerID="87ba584372e3db12541d4b5bf80c71eca7e67f739aedf3dfd8a14b84cbb27f02"
	Dec 09 11:34:20 test-preload-934001 kubelet[1151]: I1209 11:34:20.780063    1151 scope.go:110] "RemoveContainer" containerID="8274c3d670143692f8c4b2388ffa7d48d176a4326ad29285c14eea8c5649744f"
	Dec 09 11:34:20 test-preload-934001 kubelet[1151]: E1209 11:34:20.780412    1151 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d0e31e88-e025-4042-bdba-634a3948f362)\"" pod="kube-system/storage-provisioner" podUID=d0e31e88-e025-4042-bdba-634a3948f362
	Dec 09 11:34:21 test-preload-934001 kubelet[1151]: I1209 11:34:21.788421    1151 scope.go:110] "RemoveContainer" containerID="8274c3d670143692f8c4b2388ffa7d48d176a4326ad29285c14eea8c5649744f"
	Dec 09 11:34:21 test-preload-934001 kubelet[1151]: E1209 11:34:21.788703    1151 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d0e31e88-e025-4042-bdba-634a3948f362)\"" pod="kube-system/storage-provisioner" podUID=d0e31e88-e025-4042-bdba-634a3948f362
	Dec 09 11:34:22 test-preload-934001 kubelet[1151]: E1209 11:34:22.663601    1151 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 09 11:34:22 test-preload-934001 kubelet[1151]: E1209 11:34:22.663752    1151 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7f7a9af8-0040-4f29-b6e8-6e0df48ff0af-config-volume podName:7f7a9af8-0040-4f29-b6e8-6e0df48ff0af nodeName:}" failed. No retries permitted until 2024-12-09 11:34:26.663723635 +0000 UTC m=+14.157028516 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7f7a9af8-0040-4f29-b6e8-6e0df48ff0af-config-volume") pod "coredns-6d4b75cb6d-5sm4n" (UID: "7f7a9af8-0040-4f29-b6e8-6e0df48ff0af") : object "kube-system"/"coredns" not registered
	Dec 09 11:34:32 test-preload-934001 kubelet[1151]: I1209 11:34:32.727217    1151 scope.go:110] "RemoveContainer" containerID="8274c3d670143692f8c4b2388ffa7d48d176a4326ad29285c14eea8c5649744f"
	
	
	==> storage-provisioner [8274c3d670143692f8c4b2388ffa7d48d176a4326ad29285c14eea8c5649744f] <==
	I1209 11:34:19.886525       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 11:34:19.889441       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [cf20623427f0339a8b74726d7b8e04959c1a9b9a47d72460989cca67890d57a9] <==
	I1209 11:34:32.857881       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 11:34:32.874607       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 11:34:32.875247       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-934001 -n test-preload-934001
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-934001 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-934001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-934001
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-934001: (1.151889366s)
--- FAIL: TestPreload (175.44s)

                                                
                                    
x
+
TestKubernetesUpgrade (363.31s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-835095 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-835095 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m47.285899001s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-835095] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20068
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-835095" primary control-plane node in "kubernetes-upgrade-835095" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 11:40:22.389508  657318 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:40:22.389624  657318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:40:22.389633  657318 out.go:358] Setting ErrFile to fd 2...
	I1209 11:40:22.389640  657318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:40:22.389834  657318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:40:22.390496  657318 out.go:352] Setting JSON to false
	I1209 11:40:22.391565  657318 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":15766,"bootTime":1733728656,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 11:40:22.391661  657318 start.go:139] virtualization: kvm guest
	I1209 11:40:22.393781  657318 out.go:177] * [kubernetes-upgrade-835095] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 11:40:22.395018  657318 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:40:22.395016  657318 notify.go:220] Checking for updates...
	I1209 11:40:22.397062  657318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:40:22.398243  657318 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:40:22.399358  657318 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:40:22.400454  657318 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 11:40:22.401545  657318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:40:22.403043  657318 config.go:182] Loaded profile config "NoKubernetes-597739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1209 11:40:22.403148  657318 config.go:182] Loaded profile config "cert-expiration-752166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:40:22.403260  657318 config.go:182] Loaded profile config "stopped-upgrade-676904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1209 11:40:22.403365  657318 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:40:22.440866  657318 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 11:40:22.442076  657318 start.go:297] selected driver: kvm2
	I1209 11:40:22.442136  657318 start.go:901] validating driver "kvm2" against <nil>
	I1209 11:40:22.442164  657318 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:40:22.443203  657318 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:40:22.443340  657318 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 11:40:22.458366  657318 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 11:40:22.458429  657318 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 11:40:22.458757  657318 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 11:40:22.458789  657318 cni.go:84] Creating CNI manager for ""
	I1209 11:40:22.458849  657318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:40:22.458861  657318 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 11:40:22.458930  657318 start.go:340] cluster config:
	{Name:kubernetes-upgrade-835095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-835095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:40:22.459064  657318 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:40:22.461796  657318 out.go:177] * Starting "kubernetes-upgrade-835095" primary control-plane node in "kubernetes-upgrade-835095" cluster
	I1209 11:40:22.463045  657318 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 11:40:22.463104  657318 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1209 11:40:22.463123  657318 cache.go:56] Caching tarball of preloaded images
	I1209 11:40:22.463240  657318 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 11:40:22.463256  657318 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1209 11:40:22.463365  657318 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/config.json ...
	I1209 11:40:22.463388  657318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/config.json: {Name:mk9db76b4cfa55584125136ee4ba569d0f29df47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:40:22.463570  657318 start.go:360] acquireMachinesLock for kubernetes-upgrade-835095: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:40:42.978859  657318 start.go:364] duration metric: took 20.51522728s to acquireMachinesLock for "kubernetes-upgrade-835095"
	I1209 11:40:42.978957  657318 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-835095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-835095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:40:42.979085  657318 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 11:40:42.981167  657318 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 11:40:42.981401  657318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:40:42.981440  657318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:40:42.998489  657318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42465
	I1209 11:40:42.998977  657318 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:40:42.999626  657318 main.go:141] libmachine: Using API Version  1
	I1209 11:40:42.999652  657318 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:40:43.000084  657318 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:40:43.000297  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetMachineName
	I1209 11:40:43.000504  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .DriverName
	I1209 11:40:43.000701  657318 start.go:159] libmachine.API.Create for "kubernetes-upgrade-835095" (driver="kvm2")
	I1209 11:40:43.000732  657318 client.go:168] LocalClient.Create starting
	I1209 11:40:43.000771  657318 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 11:40:43.000818  657318 main.go:141] libmachine: Decoding PEM data...
	I1209 11:40:43.000836  657318 main.go:141] libmachine: Parsing certificate...
	I1209 11:40:43.000924  657318 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 11:40:43.000951  657318 main.go:141] libmachine: Decoding PEM data...
	I1209 11:40:43.000964  657318 main.go:141] libmachine: Parsing certificate...
	I1209 11:40:43.000988  657318 main.go:141] libmachine: Running pre-create checks...
	I1209 11:40:43.000999  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .PreCreateCheck
	I1209 11:40:43.001450  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetConfigRaw
	I1209 11:40:43.002026  657318 main.go:141] libmachine: Creating machine...
	I1209 11:40:43.002042  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .Create
	I1209 11:40:43.002255  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Creating KVM machine...
	I1209 11:40:43.003677  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found existing default KVM network
	I1209 11:40:43.005634  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:43.005412  657498 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:5a:d1:1f} reservation:<nil>}
	I1209 11:40:43.007212  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:43.007105  657498 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000205710}
	I1209 11:40:43.007293  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | created network xml: 
	I1209 11:40:43.007319  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | <network>
	I1209 11:40:43.007331  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG |   <name>mk-kubernetes-upgrade-835095</name>
	I1209 11:40:43.007339  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG |   <dns enable='no'/>
	I1209 11:40:43.007348  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG |   
	I1209 11:40:43.007357  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1209 11:40:43.007373  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG |     <dhcp>
	I1209 11:40:43.007381  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1209 11:40:43.007390  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG |     </dhcp>
	I1209 11:40:43.007395  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG |   </ip>
	I1209 11:40:43.007403  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG |   
	I1209 11:40:43.007410  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | </network>
	I1209 11:40:43.007420  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | 
	I1209 11:40:43.013279  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | trying to create private KVM network mk-kubernetes-upgrade-835095 192.168.50.0/24...
	I1209 11:40:43.092857  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | private KVM network mk-kubernetes-upgrade-835095 192.168.50.0/24 created
	I1209 11:40:43.092896  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:43.092822  657498 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:40:43.092917  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095 ...
	I1209 11:40:43.092934  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 11:40:43.093029  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 11:40:43.393790  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:43.393638  657498 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095/id_rsa...
	I1209 11:40:43.514076  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:43.513926  657498 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095/kubernetes-upgrade-835095.rawdisk...
	I1209 11:40:43.514115  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | Writing magic tar header
	I1209 11:40:43.514133  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | Writing SSH key tar header
	I1209 11:40:43.514148  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:43.514047  657498 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095 ...
	I1209 11:40:43.514181  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095
	I1209 11:40:43.514200  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 11:40:43.514227  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095 (perms=drwx------)
	I1209 11:40:43.514243  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:40:43.514258  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 11:40:43.514269  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 11:40:43.514285  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 11:40:43.514298  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | Checking permissions on dir: /home/jenkins
	I1209 11:40:43.514310  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | Checking permissions on dir: /home
	I1209 11:40:43.514321  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | Skipping /home - not owner
	I1209 11:40:43.514339  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 11:40:43.514352  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 11:40:43.514368  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 11:40:43.514376  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 11:40:43.514386  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Creating domain...
	I1209 11:40:43.515888  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) define libvirt domain using xml: 
	I1209 11:40:43.515924  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) <domain type='kvm'>
	I1209 11:40:43.515939  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)   <name>kubernetes-upgrade-835095</name>
	I1209 11:40:43.515952  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)   <memory unit='MiB'>2200</memory>
	I1209 11:40:43.515990  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)   <vcpu>2</vcpu>
	I1209 11:40:43.516018  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)   <features>
	I1209 11:40:43.516030  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     <acpi/>
	I1209 11:40:43.516036  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     <apic/>
	I1209 11:40:43.516047  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     <pae/>
	I1209 11:40:43.516071  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     
	I1209 11:40:43.516082  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)   </features>
	I1209 11:40:43.516091  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)   <cpu mode='host-passthrough'>
	I1209 11:40:43.516102  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)   
	I1209 11:40:43.516117  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)   </cpu>
	I1209 11:40:43.516173  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)   <os>
	I1209 11:40:43.516218  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     <type>hvm</type>
	I1209 11:40:43.516230  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     <boot dev='cdrom'/>
	I1209 11:40:43.516238  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     <boot dev='hd'/>
	I1209 11:40:43.516253  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     <bootmenu enable='no'/>
	I1209 11:40:43.516263  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)   </os>
	I1209 11:40:43.516272  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)   <devices>
	I1209 11:40:43.516284  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     <disk type='file' device='cdrom'>
	I1209 11:40:43.516299  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095/boot2docker.iso'/>
	I1209 11:40:43.516315  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)       <target dev='hdc' bus='scsi'/>
	I1209 11:40:43.516326  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)       <readonly/>
	I1209 11:40:43.516342  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     </disk>
	I1209 11:40:43.516355  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     <disk type='file' device='disk'>
	I1209 11:40:43.516372  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 11:40:43.516390  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095/kubernetes-upgrade-835095.rawdisk'/>
	I1209 11:40:43.516405  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)       <target dev='hda' bus='virtio'/>
	I1209 11:40:43.516430  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     </disk>
	I1209 11:40:43.516465  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     <interface type='network'>
	I1209 11:40:43.516483  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)       <source network='mk-kubernetes-upgrade-835095'/>
	I1209 11:40:43.516496  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)       <model type='virtio'/>
	I1209 11:40:43.516511  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     </interface>
	I1209 11:40:43.516524  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     <interface type='network'>
	I1209 11:40:43.516538  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)       <source network='default'/>
	I1209 11:40:43.516553  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)       <model type='virtio'/>
	I1209 11:40:43.516565  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     </interface>
	I1209 11:40:43.516580  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     <serial type='pty'>
	I1209 11:40:43.516593  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)       <target port='0'/>
	I1209 11:40:43.516622  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     </serial>
	I1209 11:40:43.516644  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     <console type='pty'>
	I1209 11:40:43.516655  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)       <target type='serial' port='0'/>
	I1209 11:40:43.516662  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     </console>
	I1209 11:40:43.516672  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     <rng model='virtio'>
	I1209 11:40:43.516683  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)       <backend model='random'>/dev/random</backend>
	I1209 11:40:43.516695  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     </rng>
	I1209 11:40:43.516702  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     
	I1209 11:40:43.516712  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)     
	I1209 11:40:43.516721  657318 main.go:141] libmachine: (kubernetes-upgrade-835095)   </devices>
	I1209 11:40:43.516729  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) </domain>
	I1209 11:40:43.516738  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) 
	I1209 11:40:43.523908  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:50:37:a5 in network default
	I1209 11:40:43.524701  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Ensuring networks are active...
	I1209 11:40:43.524719  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:40:43.525586  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Ensuring network default is active
	I1209 11:40:43.526020  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Ensuring network mk-kubernetes-upgrade-835095 is active
	I1209 11:40:43.526671  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Getting domain xml...
	I1209 11:40:43.527694  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Creating domain...
	I1209 11:40:44.889069  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Waiting to get IP...
	I1209 11:40:44.889906  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:40:44.890522  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | unable to find current IP address of domain kubernetes-upgrade-835095 in network mk-kubernetes-upgrade-835095
	I1209 11:40:44.890549  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:44.890496  657498 retry.go:31] will retry after 203.057613ms: waiting for machine to come up
	I1209 11:40:45.094973  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:40:45.095367  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | unable to find current IP address of domain kubernetes-upgrade-835095 in network mk-kubernetes-upgrade-835095
	I1209 11:40:45.095393  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:45.095318  657498 retry.go:31] will retry after 372.101444ms: waiting for machine to come up
	I1209 11:40:45.468824  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:40:45.527898  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | unable to find current IP address of domain kubernetes-upgrade-835095 in network mk-kubernetes-upgrade-835095
	I1209 11:40:45.527935  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:45.527834  657498 retry.go:31] will retry after 397.702582ms: waiting for machine to come up
	I1209 11:40:46.096057  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:40:46.096589  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | unable to find current IP address of domain kubernetes-upgrade-835095 in network mk-kubernetes-upgrade-835095
	I1209 11:40:46.096621  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:46.096557  657498 retry.go:31] will retry after 547.630903ms: waiting for machine to come up
	I1209 11:40:46.645521  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:40:46.646092  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | unable to find current IP address of domain kubernetes-upgrade-835095 in network mk-kubernetes-upgrade-835095
	I1209 11:40:46.646124  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:46.646046  657498 retry.go:31] will retry after 698.022243ms: waiting for machine to come up
	I1209 11:40:47.346135  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:40:47.346815  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | unable to find current IP address of domain kubernetes-upgrade-835095 in network mk-kubernetes-upgrade-835095
	I1209 11:40:47.346838  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:47.346749  657498 retry.go:31] will retry after 622.838586ms: waiting for machine to come up
	I1209 11:40:47.971606  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:40:47.972116  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | unable to find current IP address of domain kubernetes-upgrade-835095 in network mk-kubernetes-upgrade-835095
	I1209 11:40:47.972148  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:47.972068  657498 retry.go:31] will retry after 767.898808ms: waiting for machine to come up
	I1209 11:40:48.741616  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:40:48.742184  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | unable to find current IP address of domain kubernetes-upgrade-835095 in network mk-kubernetes-upgrade-835095
	I1209 11:40:48.742211  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:48.742122  657498 retry.go:31] will retry after 1.486905855s: waiting for machine to come up
	I1209 11:40:50.231256  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:40:50.231711  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | unable to find current IP address of domain kubernetes-upgrade-835095 in network mk-kubernetes-upgrade-835095
	I1209 11:40:50.231743  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:50.231653  657498 retry.go:31] will retry after 1.362959197s: waiting for machine to come up
	I1209 11:40:51.596589  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:40:51.597044  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | unable to find current IP address of domain kubernetes-upgrade-835095 in network mk-kubernetes-upgrade-835095
	I1209 11:40:51.597072  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:51.597015  657498 retry.go:31] will retry after 2.049023918s: waiting for machine to come up
	I1209 11:40:53.647507  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:40:53.647906  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | unable to find current IP address of domain kubernetes-upgrade-835095 in network mk-kubernetes-upgrade-835095
	I1209 11:40:53.647942  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:53.647844  657498 retry.go:31] will retry after 2.493320737s: waiting for machine to come up
	I1209 11:40:56.144492  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:40:56.144976  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | unable to find current IP address of domain kubernetes-upgrade-835095 in network mk-kubernetes-upgrade-835095
	I1209 11:40:56.145015  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:56.144925  657498 retry.go:31] will retry after 3.440616315s: waiting for machine to come up
	I1209 11:40:59.587826  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:40:59.588282  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | unable to find current IP address of domain kubernetes-upgrade-835095 in network mk-kubernetes-upgrade-835095
	I1209 11:40:59.588314  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | I1209 11:40:59.588231  657498 retry.go:31] will retry after 3.992724018s: waiting for machine to come up
	I1209 11:41:03.582255  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:03.582793  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has current primary IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:03.582814  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Found IP for machine: 192.168.50.241
	I1209 11:41:03.582827  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Reserving static IP address...
	I1209 11:41:03.583291  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-835095", mac: "52:54:00:76:1d:d0", ip: "192.168.50.241"} in network mk-kubernetes-upgrade-835095
	I1209 11:41:03.660580  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | Getting to WaitForSSH function...
	I1209 11:41:03.660634  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Reserved static IP address: 192.168.50.241
	I1209 11:41:03.660653  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Waiting for SSH to be available...
	I1209 11:41:03.663276  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:03.663645  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:40:57 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:76:1d:d0}
	I1209 11:41:03.663678  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:03.663838  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | Using SSH client type: external
	I1209 11:41:03.663872  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095/id_rsa (-rw-------)
	I1209 11:41:03.663906  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:41:03.663918  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | About to run SSH command:
	I1209 11:41:03.663948  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | exit 0
	I1209 11:41:03.790075  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | SSH cmd err, output: <nil>: 
	I1209 11:41:03.790463  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) KVM machine creation complete!
	I1209 11:41:03.790844  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetConfigRaw
	I1209 11:41:03.791409  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .DriverName
	I1209 11:41:03.791620  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .DriverName
	I1209 11:41:03.791780  657318 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 11:41:03.791794  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetState
	I1209 11:41:03.793089  657318 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 11:41:03.793103  657318 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 11:41:03.793109  657318 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 11:41:03.793114  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:41:03.795439  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:03.795887  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:40:57 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:41:03.795917  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:03.796060  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:41:03.796222  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:41:03.796382  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:41:03.796540  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:41:03.796713  657318 main.go:141] libmachine: Using SSH client type: native
	I1209 11:41:03.796920  657318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I1209 11:41:03.796934  657318 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 11:41:03.901500  657318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:41:03.901530  657318 main.go:141] libmachine: Detecting the provisioner...
	I1209 11:41:03.901539  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:41:03.904347  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:03.904657  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:40:57 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:41:03.904693  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:03.904841  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:41:03.905045  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:41:03.905229  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:41:03.905333  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:41:03.905486  657318 main.go:141] libmachine: Using SSH client type: native
	I1209 11:41:03.905699  657318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I1209 11:41:03.905714  657318 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 11:41:04.010830  657318 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 11:41:04.010902  657318 main.go:141] libmachine: found compatible host: buildroot
	I1209 11:41:04.010913  657318 main.go:141] libmachine: Provisioning with buildroot...
	I1209 11:41:04.010922  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetMachineName
	I1209 11:41:04.011177  657318 buildroot.go:166] provisioning hostname "kubernetes-upgrade-835095"
	I1209 11:41:04.011225  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetMachineName
	I1209 11:41:04.011404  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:41:04.013729  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:04.014037  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:40:57 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:41:04.014067  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:04.014211  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:41:04.014401  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:41:04.014587  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:41:04.014737  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:41:04.014880  657318 main.go:141] libmachine: Using SSH client type: native
	I1209 11:41:04.015090  657318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I1209 11:41:04.015111  657318 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-835095 && echo "kubernetes-upgrade-835095" | sudo tee /etc/hostname
	I1209 11:41:04.131535  657318 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-835095
	
	I1209 11:41:04.131574  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:41:04.134670  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:04.135045  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:40:57 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:41:04.135074  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:04.135291  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:41:04.135517  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:41:04.135713  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:41:04.135851  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:41:04.136026  657318 main.go:141] libmachine: Using SSH client type: native
	I1209 11:41:04.136204  657318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I1209 11:41:04.136222  657318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-835095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-835095/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-835095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:41:04.248057  657318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:41:04.248087  657318 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:41:04.248160  657318 buildroot.go:174] setting up certificates
	I1209 11:41:04.248173  657318 provision.go:84] configureAuth start
	I1209 11:41:04.248185  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetMachineName
	I1209 11:41:04.248499  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetIP
	I1209 11:41:04.251230  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:04.251553  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:40:57 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:41:04.251583  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:04.251759  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:41:04.254012  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:04.254359  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:40:57 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:41:04.254385  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:04.254557  657318 provision.go:143] copyHostCerts
	I1209 11:41:04.254615  657318 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:41:04.254636  657318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:41:04.254690  657318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:41:04.254792  657318 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:41:04.254826  657318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:41:04.254862  657318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:41:04.254941  657318 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:41:04.254951  657318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:41:04.254970  657318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:41:04.255023  657318 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-835095 san=[127.0.0.1 192.168.50.241 kubernetes-upgrade-835095 localhost minikube]
	I1209 11:41:04.535701  657318 provision.go:177] copyRemoteCerts
	I1209 11:41:04.535766  657318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:41:04.535797  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:41:04.538668  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:04.538976  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:40:57 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:41:04.539013  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:04.539146  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:41:04.539356  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:41:04.539544  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:41:04.539688  657318 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095/id_rsa Username:docker}
	I1209 11:41:04.625758  657318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:41:04.648083  657318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1209 11:41:04.669577  657318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 11:41:04.692911  657318 provision.go:87] duration metric: took 444.721696ms to configureAuth
	I1209 11:41:04.692943  657318 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:41:04.693108  657318 config.go:182] Loaded profile config "kubernetes-upgrade-835095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 11:41:04.693187  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:41:04.696109  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:04.696515  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:40:57 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:41:04.696549  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:04.696679  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:41:04.696886  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:41:04.697056  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:41:04.697253  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:41:04.697476  657318 main.go:141] libmachine: Using SSH client type: native
	I1209 11:41:04.697699  657318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I1209 11:41:04.697721  657318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:41:04.933258  657318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:41:04.933295  657318 main.go:141] libmachine: Checking connection to Docker...
	I1209 11:41:04.933304  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetURL
	I1209 11:41:04.934760  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | Using libvirt version 6000000
	I1209 11:41:04.936942  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:04.937363  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:40:57 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:41:04.937400  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:04.937543  657318 main.go:141] libmachine: Docker is up and running!
	I1209 11:41:04.937562  657318 main.go:141] libmachine: Reticulating splines...
	I1209 11:41:04.937572  657318 client.go:171] duration metric: took 21.936830782s to LocalClient.Create
	I1209 11:41:04.937603  657318 start.go:167] duration metric: took 21.936914551s to libmachine.API.Create "kubernetes-upgrade-835095"
	I1209 11:41:04.937615  657318 start.go:293] postStartSetup for "kubernetes-upgrade-835095" (driver="kvm2")
	I1209 11:41:04.937626  657318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:41:04.937644  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .DriverName
	I1209 11:41:04.937868  657318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:41:04.937898  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:41:04.940175  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:04.940536  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:40:57 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:41:04.940568  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:04.940764  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:41:04.940905  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:41:04.941036  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:41:04.941197  657318 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095/id_rsa Username:docker}
	I1209 11:41:05.024077  657318 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:41:05.028404  657318 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:41:05.028436  657318 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:41:05.028519  657318 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:41:05.028616  657318 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:41:05.028736  657318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:41:05.037792  657318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:41:05.060628  657318 start.go:296] duration metric: took 122.994751ms for postStartSetup
	I1209 11:41:05.060690  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetConfigRaw
	I1209 11:41:05.061517  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetIP
	I1209 11:41:05.064310  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:05.064675  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:40:57 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:41:05.064703  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:05.064983  657318 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/config.json ...
	I1209 11:41:05.065202  657318 start.go:128] duration metric: took 22.086102174s to createHost
	I1209 11:41:05.065238  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:41:05.067618  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:05.068011  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:40:57 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:41:05.068043  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:05.068159  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:41:05.068406  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:41:05.068584  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:41:05.068688  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:41:05.068895  657318 main.go:141] libmachine: Using SSH client type: native
	I1209 11:41:05.069105  657318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I1209 11:41:05.069120  657318 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:41:05.171011  657318 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733744465.139863611
	
	I1209 11:41:05.171042  657318 fix.go:216] guest clock: 1733744465.139863611
	I1209 11:41:05.171053  657318 fix.go:229] Guest: 2024-12-09 11:41:05.139863611 +0000 UTC Remote: 2024-12-09 11:41:05.065217983 +0000 UTC m=+42.719054101 (delta=74.645628ms)
	I1209 11:41:05.171113  657318 fix.go:200] guest clock delta is within tolerance: 74.645628ms
	I1209 11:41:05.171123  657318 start.go:83] releasing machines lock for "kubernetes-upgrade-835095", held for 22.192208087s
	I1209 11:41:05.171162  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .DriverName
	I1209 11:41:05.171453  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetIP
	I1209 11:41:05.174461  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:05.174787  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:40:57 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:41:05.174819  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:05.174994  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .DriverName
	I1209 11:41:05.175529  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .DriverName
	I1209 11:41:05.175732  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .DriverName
	I1209 11:41:05.175831  657318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:41:05.175882  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:41:05.176024  657318 ssh_runner.go:195] Run: cat /version.json
	I1209 11:41:05.176056  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:41:05.178808  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:05.178981  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:05.179171  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:40:57 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:41:05.179198  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:05.179363  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:41:05.179471  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:40:57 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:41:05.179509  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:05.179547  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:41:05.179636  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:41:05.179709  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:41:05.179769  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:41:05.179856  657318 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095/id_rsa Username:docker}
	I1209 11:41:05.179949  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:41:05.180089  657318 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095/id_rsa Username:docker}
	I1209 11:41:05.259578  657318 ssh_runner.go:195] Run: systemctl --version
	I1209 11:41:05.292687  657318 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:41:05.465887  657318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:41:05.471746  657318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:41:05.471812  657318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:41:05.487686  657318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:41:05.487714  657318 start.go:495] detecting cgroup driver to use...
	I1209 11:41:05.487781  657318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:41:05.505452  657318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:41:05.521106  657318 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:41:05.521236  657318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:41:05.538045  657318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:41:05.554367  657318 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:41:05.681346  657318 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:41:05.834912  657318 docker.go:233] disabling docker service ...
	I1209 11:41:05.834990  657318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:41:05.850817  657318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:41:05.864533  657318 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:41:05.994815  657318 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:41:06.121223  657318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:41:06.134439  657318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:41:06.151726  657318 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 11:41:06.151802  657318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:41:06.161717  657318 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:41:06.161786  657318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:41:06.171460  657318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:41:06.181559  657318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:41:06.191842  657318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
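Between the crictl.yaml write and the sed edits above, the runtime side is now fully wired up: crictl talks to the CRI-O socket without extra flags, the pause image is pinned to registry.k8s.io/pause:3.2, and CRI-O runs with the cgroupfs manager and conmon in the "pod" cgroup. A quick way to verify both pieces by hand once CRI-O has been restarted a few lines further down (a sketch; the expected values are read off the commands above, not from the actual files):

    sudo crictl info | head                 # only works if /etc/crictl.yaml points at the CRI-O socket
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected, based on the sed edits above:
    #   pause_image = "registry.k8s.io/pause:3.2"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"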
	I1209 11:41:06.201827  657318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:41:06.210845  657318 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:41:06.210905  657318 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:41:06.223209  657318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
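The failed sysctl above is expected on a fresh guest: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is why the provisioner immediately follows up with modprobe and then enables IPv4 forwarding. The same sequence by hand, with the checks spelled out (a sketch; the printed values depend on kernel defaults):

    sudo modprobe br_netfilter                             # creates /proc/sys/net/bridge/*
    sudo sysctl net.bridge.bridge-nf-call-iptables         # now resolves instead of failing
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'    # let the node forward pod traffic
    cat /proc/sys/net/ipv4/ip_forward                      # expect: 1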
	I1209 11:41:06.232359  657318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:41:06.360561  657318 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:41:06.457583  657318 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:41:06.457681  657318 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:41:06.463321  657318 start.go:563] Will wait 60s for crictl version
	I1209 11:41:06.463386  657318 ssh_runner.go:195] Run: which crictl
	I1209 11:41:06.467018  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:41:06.508882  657318 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:41:06.508979  657318 ssh_runner.go:195] Run: crio --version
	I1209 11:41:06.536044  657318 ssh_runner.go:195] Run: crio --version
	I1209 11:41:06.565811  657318 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1209 11:41:06.567052  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetIP
	I1209 11:41:06.571208  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:06.571955  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:40:57 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:41:06.572040  657318 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:41:06.572260  657318 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1209 11:41:06.576597  657318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:41:06.590042  657318 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-835095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-835095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.241 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:41:06.590209  657318 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 11:41:06.590278  657318 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:41:06.630871  657318 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:41:06.630952  657318 ssh_runner.go:195] Run: which lz4
	I1209 11:41:06.635215  657318 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:41:06.640311  657318 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:41:06.640350  657318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1209 11:41:08.163361  657318 crio.go:462] duration metric: took 1.528181295s to copy over tarball
	I1209 11:41:08.163466  657318 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:41:10.745569  657318 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.582062416s)
	I1209 11:41:10.745614  657318 crio.go:469] duration metric: took 2.582208131s to extract the tarball
	I1209 11:41:10.745624  657318 ssh_runner.go:146] rm: /preloaded.tar.lz4
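The preload above is simply a single lz4 tarball of container storage: it is copied to /preloaded.tar.lz4 on the guest, unpacked over /var, and deleted, after which minikube re-checks with crictl whether the expected images are visible (in this run, as the next lines show, they still are not, so it falls back to its per-image cache). The manual equivalent of the unpack-and-check, assuming the tarball is already on the guest:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images | grep kube-apiserver || echo "images not preloaded"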
	I1209 11:41:10.787388  657318 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:41:10.836535  657318 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:41:10.836569  657318 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 11:41:10.836637  657318 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:41:10.836683  657318 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:41:10.836708  657318 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:41:10.836727  657318 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:41:10.836763  657318 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 11:41:10.836724  657318 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 11:41:10.836691  657318 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:41:10.836691  657318 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:41:10.838532  657318 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:41:10.838545  657318 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 11:41:10.838570  657318 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 11:41:10.838544  657318 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:41:10.838532  657318 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:41:10.838536  657318 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:41:10.838534  657318 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:41:10.838542  657318 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:41:11.044304  657318 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:41:11.081284  657318 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 11:41:11.086069  657318 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:41:11.086719  657318 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 11:41:11.087568  657318 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 11:41:11.087649  657318 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:41:11.087693  657318 ssh_runner.go:195] Run: which crictl
	I1209 11:41:11.111077  657318 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 11:41:11.115017  657318 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:41:11.139143  657318 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:41:11.147642  657318 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 11:41:11.147695  657318 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 11:41:11.147745  657318 ssh_runner.go:195] Run: which crictl
	I1209 11:41:11.214786  657318 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 11:41:11.214842  657318 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:41:11.214898  657318 ssh_runner.go:195] Run: which crictl
	I1209 11:41:11.216483  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:41:11.216673  657318 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 11:41:11.216714  657318 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:41:11.216755  657318 ssh_runner.go:195] Run: which crictl
	I1209 11:41:11.218239  657318 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 11:41:11.218270  657318 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 11:41:11.218310  657318 ssh_runner.go:195] Run: which crictl
	I1209 11:41:11.231584  657318 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 11:41:11.231659  657318 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:41:11.231709  657318 ssh_runner.go:195] Run: which crictl
	I1209 11:41:11.271629  657318 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 11:41:11.271679  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:41:11.271688  657318 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:41:11.271708  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:41:11.271726  657318 ssh_runner.go:195] Run: which crictl
	I1209 11:41:11.284527  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:41:11.284580  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:41:11.284635  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:41:11.284684  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:41:11.364465  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:41:11.364469  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:41:11.404968  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:41:11.405072  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:41:11.405134  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:41:11.405196  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:41:11.415604  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:41:11.454711  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:41:11.553729  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:41:11.553775  657318 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 11:41:11.553877  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:41:11.576701  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:41:11.581492  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:41:11.598273  657318 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 11:41:11.598647  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:41:11.652988  657318 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 11:41:11.662518  657318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:41:11.682887  657318 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 11:41:11.693940  657318 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 11:41:11.694909  657318 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 11:41:11.721717  657318 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 11:41:12.115438  657318 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:41:12.253833  657318 cache_images.go:92] duration metric: took 1.417237708s to LoadCachedImages
	W1209 11:41:12.253958  657318 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
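The block above is minikube's per-image fallback: for each required image it asks the runtime via podman image inspect, removes any stale tag with crictl rmi, and then tries to load the image from its local cache directory; here that also fails because the cached archive (kube-controller-manager_v1.20.0) does not exist on this host, so kubeadm will have to pull the images itself during init. A sketch of the same check-then-load pattern for one image; the archive path follows the layout shown in the log, and loading it with podman load is an assumption about the hand-run equivalent, not necessarily what minikube itself executes:

    img=registry.k8s.io/kube-apiserver:v1.20.0
    archive=$HOME/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0   # hypothetical local path
    if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
        sudo crictl rmi "$img" 2>/dev/null || true    # drop the stale tag, as the log does above
        sudo podman load -i "$archive"                # assumed manual equivalent of the cache load
    fi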
	I1209 11:41:12.253977  657318 kubeadm.go:934] updating node { 192.168.50.241 8443 v1.20.0 crio true true} ...
	I1209 11:41:12.254105  657318 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-835095 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-835095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
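The unit fragment above becomes the 10-kubeadm.conf drop-in that is copied to /etc/systemd/system/kubelet.service.d/ a little further down (the 433-byte scp). To see what the kubelet will actually run with after the daemon-reload, something like this works (a sketch using only standard systemd tooling):

    systemctl cat kubelet                     # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart       # the fully resolved command line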
	I1209 11:41:12.254250  657318 ssh_runner.go:195] Run: crio config
	I1209 11:41:12.305066  657318 cni.go:84] Creating CNI manager for ""
	I1209 11:41:12.305103  657318 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:41:12.305119  657318 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:41:12.305165  657318 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.241 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-835095 NodeName:kubernetes-upgrade-835095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 11:41:12.305345  657318 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-835095"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.241
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.241"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:41:12.305429  657318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 11:41:12.318616  657318 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:41:12.318711  657318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:41:12.331105  657318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1209 11:41:12.348571  657318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:41:12.364021  657318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
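The kubeadm config printed above is what was just staged as /var/tmp/minikube/kubeadm.yaml.new (the 2126-byte copy on the previous line) and is later moved to /var/tmp/minikube/kubeadm.yaml before init runs. If a config like this needs a sanity check by hand before kubeadm is allowed to act on it, a dry run with the same pinned binary is the least invasive option (a sketch; --dry-run is a standard kubeadm init flag, and the paths are the ones from this log):

    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run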
	I1209 11:41:12.379833  657318 ssh_runner.go:195] Run: grep 192.168.50.241	control-plane.minikube.internal$ /etc/hosts
	I1209 11:41:12.383791  657318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.241	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:41:12.396220  657318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:41:12.522524  657318 ssh_runner.go:195] Run: sudo systemctl start kubelet
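Worth noting: the kubelet is started here but never enabled, which is exactly what kubeadm's preflight warns about further down ("kubelet service is not enabled"). The hand-run equivalent that would also silence that warning (a sketch):

    sudo systemctl daemon-reload
    sudo systemctl enable --now kubelet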
	I1209 11:41:12.542058  657318 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095 for IP: 192.168.50.241
	I1209 11:41:12.542084  657318 certs.go:194] generating shared ca certs ...
	I1209 11:41:12.542102  657318 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:41:12.542306  657318 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:41:12.542350  657318 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:41:12.542363  657318 certs.go:256] generating profile certs ...
	I1209 11:41:12.542426  657318 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/client.key
	I1209 11:41:12.542440  657318 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/client.crt with IP's: []
	I1209 11:41:12.798986  657318 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/client.crt ...
	I1209 11:41:12.799028  657318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/client.crt: {Name:mk9072c5bf4b3f98237a56e0aa2c056eb5b624ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:41:12.799225  657318 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/client.key ...
	I1209 11:41:12.799245  657318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/client.key: {Name:mk1467afd01258834a150d0188496582bab22212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:41:12.799354  657318 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/apiserver.key.65a63dc6
	I1209 11:41:12.799376  657318 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/apiserver.crt.65a63dc6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.241]
	I1209 11:41:13.160501  657318 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/apiserver.crt.65a63dc6 ...
	I1209 11:41:13.160537  657318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/apiserver.crt.65a63dc6: {Name:mk91f984247d4d51fbc2f2e790a0f9c478e6f229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:41:13.160730  657318 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/apiserver.key.65a63dc6 ...
	I1209 11:41:13.160758  657318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/apiserver.key.65a63dc6: {Name:mk9da0fc4e0c7718db2786ee331375d186305190 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:41:13.160880  657318 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/apiserver.crt.65a63dc6 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/apiserver.crt
	I1209 11:41:13.160956  657318 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/apiserver.key.65a63dc6 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/apiserver.key
	I1209 11:41:13.161008  657318 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/proxy-client.key
	I1209 11:41:13.161027  657318 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/proxy-client.crt with IP's: []
	I1209 11:41:13.334528  657318 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/proxy-client.crt ...
	I1209 11:41:13.334563  657318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/proxy-client.crt: {Name:mkb4d67ace129873af63c54641adb7f7b82b2910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:41:13.334733  657318 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/proxy-client.key ...
	I1209 11:41:13.334747  657318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/proxy-client.key: {Name:mk4c78bec64a0cb2e9490b18157a25da645afd51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
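All of the profile certificates above are generated on the host side and only copied into /var/lib/minikube/certs by the scp calls that follow. A quick way to confirm that the apiserver certificate really carries the SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.50.241) is to inspect it with openssl (a sketch; the path is the profile directory shown above):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/apiserver.crt \
      | grep -A1 'Subject Alternative Name'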
	I1209 11:41:13.334915  657318 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:41:13.334969  657318 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:41:13.334979  657318 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:41:13.335013  657318 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:41:13.335051  657318 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:41:13.335087  657318 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:41:13.335146  657318 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:41:13.335763  657318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:41:13.367748  657318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:41:13.392521  657318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:41:13.416002  657318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:41:13.439458  657318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1209 11:41:13.465643  657318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:41:13.487833  657318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:41:13.515699  657318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 11:41:13.542046  657318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:41:13.565804  657318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:41:13.589297  657318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:41:13.613209  657318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:41:13.629491  657318 ssh_runner.go:195] Run: openssl version
	I1209 11:41:13.635464  657318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:41:13.646276  657318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:41:13.650650  657318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:41:13.650712  657318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:41:13.656755  657318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:41:13.668511  657318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:41:13.680398  657318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:41:13.684767  657318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:41:13.684844  657318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:41:13.690321  657318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:41:13.701458  657318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:41:13.713470  657318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:41:13.717850  657318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:41:13.717925  657318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:41:13.723622  657318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
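The openssl -hash calls above explain the odd-looking link names: every CA dropped into /usr/share/ca-certificates also gets a /etc/ssl/certs/<subject-hash>.0 symlink so OpenSSL can locate it by hash. Done by hand for the minikube CA it looks like this (a sketch; b5213941 is the hash this very run linked to):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"                                                   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/"$h".0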
	I1209 11:41:13.735365  657318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:41:13.739344  657318 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 11:41:13.739425  657318 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-835095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-835095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.241 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:41:13.739533  657318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:41:13.739592  657318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:41:13.778325  657318 cri.go:89] found id: ""
	I1209 11:41:13.778414  657318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:41:13.788896  657318 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:41:13.798632  657318 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:41:13.811999  657318 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:41:13.812030  657318 kubeadm.go:157] found existing configuration files:
	
	I1209 11:41:13.812142  657318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:41:13.824944  657318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:41:13.825038  657318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:41:13.835349  657318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:41:13.844791  657318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:41:13.844877  657318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:41:13.854651  657318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:41:13.863523  657318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:41:13.863600  657318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:41:13.872851  657318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:41:13.882882  657318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:41:13.882959  657318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:41:13.893532  657318 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:41:14.198789  657318 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:43:12.232702  657318 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 11:43:12.232808  657318 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 11:43:12.234247  657318 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:43:12.234347  657318 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:43:12.234450  657318 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:43:12.234602  657318 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:43:12.234742  657318 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:43:12.234855  657318 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:43:12.236413  657318 out.go:235]   - Generating certificates and keys ...
	I1209 11:43:12.236513  657318 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:43:12.236604  657318 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:43:12.236701  657318 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 11:43:12.236783  657318 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 11:43:12.236882  657318 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 11:43:12.236956  657318 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 11:43:12.237043  657318 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 11:43:12.237196  657318 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-835095 localhost] and IPs [192.168.50.241 127.0.0.1 ::1]
	I1209 11:43:12.237292  657318 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 11:43:12.237485  657318 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-835095 localhost] and IPs [192.168.50.241 127.0.0.1 ::1]
	I1209 11:43:12.237593  657318 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 11:43:12.237684  657318 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 11:43:12.237764  657318 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 11:43:12.237836  657318 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:43:12.237906  657318 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:43:12.237975  657318 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:43:12.238061  657318 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:43:12.238134  657318 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:43:12.238300  657318 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:43:12.238421  657318 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:43:12.238457  657318 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:43:12.238515  657318 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:43:12.240029  657318 out.go:235]   - Booting up control plane ...
	I1209 11:43:12.240120  657318 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:43:12.240233  657318 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:43:12.240347  657318 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:43:12.240455  657318 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:43:12.240673  657318 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:43:12.240750  657318 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:43:12.240842  657318 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:43:12.241093  657318 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:43:12.241192  657318 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:43:12.241429  657318 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:43:12.241537  657318 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:43:12.241799  657318 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:43:12.241906  657318 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:43:12.242097  657318 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:43:12.242199  657318 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:43:12.242464  657318 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:43:12.242486  657318 kubeadm.go:310] 
	I1209 11:43:12.242546  657318 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 11:43:12.242605  657318 kubeadm.go:310] 		timed out waiting for the condition
	I1209 11:43:12.242616  657318 kubeadm.go:310] 
	I1209 11:43:12.242666  657318 kubeadm.go:310] 	This error is likely caused by:
	I1209 11:43:12.242707  657318 kubeadm.go:310] 		- The kubelet is not running
	I1209 11:43:12.242830  657318 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 11:43:12.242843  657318 kubeadm.go:310] 
	I1209 11:43:12.242986  657318 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 11:43:12.243037  657318 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 11:43:12.243078  657318 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 11:43:12.243088  657318 kubeadm.go:310] 
	I1209 11:43:12.243246  657318 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 11:43:12.243376  657318 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 11:43:12.243390  657318 kubeadm.go:310] 
	I1209 11:43:12.243521  657318 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 11:43:12.243644  657318 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 11:43:12.243756  657318 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 11:43:12.243858  657318 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 11:43:12.243899  657318 kubeadm.go:310] 
	W1209 11:43:12.244052  657318 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-835095 localhost] and IPs [192.168.50.241 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-835095 localhost] and IPs [192.168.50.241 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1209 11:43:12.244104  657318 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:43:12.935679  657318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:43:12.949634  657318 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:43:12.958650  657318 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:43:12.958668  657318 kubeadm.go:157] found existing configuration files:
	
	I1209 11:43:12.958707  657318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:43:12.967274  657318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:43:12.967355  657318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:43:12.975971  657318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:43:12.987688  657318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:43:12.987756  657318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:43:12.999686  657318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:43:13.008726  657318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:43:13.008797  657318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:43:13.017694  657318 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:43:13.026212  657318 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:43:13.026274  657318 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
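The four grep-then-rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and deleted when the check fails (here the files are simply missing after the preceding `kubeadm reset`). A minimal shell sketch of the same check, using only the paths and endpoint that appear in the log, not minikube's actual implementation:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep exiting non-zero (status 1 or 2) means the endpoint is absent or the file is missing,
      # so the file is removed and kubeadm is left to regenerate it
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done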
	I1209 11:43:13.035058  657318 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:43:13.106387  657318 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:43:13.106483  657318 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:43:13.247332  657318 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:43:13.247511  657318 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:43:13.247640  657318 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:43:13.463747  657318 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:43:13.466523  657318 out.go:235]   - Generating certificates and keys ...
	I1209 11:43:13.466626  657318 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:43:13.466712  657318 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:43:13.466863  657318 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:43:13.466972  657318 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:43:13.467089  657318 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:43:13.467177  657318 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:43:13.467289  657318 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:43:13.467378  657318 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:43:13.467488  657318 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:43:13.467616  657318 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:43:13.467672  657318 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:43:13.467780  657318 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:43:13.581191  657318 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:43:13.652305  657318 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:43:13.748876  657318 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:43:13.825274  657318 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:43:13.839386  657318 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:43:13.840462  657318 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:43:13.840545  657318 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:43:13.967792  657318 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:43:13.969332  657318 out.go:235]   - Booting up control plane ...
	I1209 11:43:13.969444  657318 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:43:13.975374  657318 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:43:13.976343  657318 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:43:13.977045  657318 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:43:13.979035  657318 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:43:53.981625  657318 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:43:53.982232  657318 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:43:53.982483  657318 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:43:58.982906  657318 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:43:58.983145  657318 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:44:08.983912  657318 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:44:08.984101  657318 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:44:28.983510  657318 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:44:28.983736  657318 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:45:08.983452  657318 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:45:08.983736  657318 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
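The timestamps on the kubelet-check lines above show kubeadm's wait loop: after the initial 40s timeout (11:43:53) it re-probes the kubelet health endpoint at roughly doubling intervals (+5s, +10s, +20s, +40s) until the 4m0s wait-control-plane deadline expires. A rough, illustrative equivalent of the probe the log reports, assuming the default kubelet healthz port 10248 (not part of the test run):

    timeout=5
    until curl -sSL http://localhost:10248/healthz >/dev/null; do
      echo "kubelet not healthy yet; retrying in ${timeout}s"
      sleep "$timeout"
      timeout=$((timeout * 2))   # 5s, 10s, 20s, 40s, matching the gaps between the log entries
    done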
	I1209 11:45:08.983763  657318 kubeadm.go:310] 
	I1209 11:45:08.983811  657318 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 11:45:08.983862  657318 kubeadm.go:310] 		timed out waiting for the condition
	I1209 11:45:08.983873  657318 kubeadm.go:310] 
	I1209 11:45:08.983925  657318 kubeadm.go:310] 	This error is likely caused by:
	I1209 11:45:08.983970  657318 kubeadm.go:310] 		- The kubelet is not running
	I1209 11:45:08.984110  657318 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 11:45:08.984123  657318 kubeadm.go:310] 
	I1209 11:45:08.984266  657318 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 11:45:08.984309  657318 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 11:45:08.984363  657318 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 11:45:08.984373  657318 kubeadm.go:310] 
	I1209 11:45:08.984497  657318 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 11:45:08.984577  657318 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 11:45:08.984585  657318 kubeadm.go:310] 
	I1209 11:45:08.984676  657318 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 11:45:08.984753  657318 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 11:45:08.984816  657318 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 11:45:08.984873  657318 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 11:45:08.984881  657318 kubeadm.go:310] 
	I1209 11:45:08.985666  657318 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:45:08.985815  657318 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 11:45:08.985913  657318 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 11:45:08.985989  657318 kubeadm.go:394] duration metric: took 3m55.24657848s to StartCluster
	I1209 11:45:08.986037  657318 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:45:08.986103  657318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:45:09.030186  657318 cri.go:89] found id: ""
	I1209 11:45:09.030225  657318 logs.go:282] 0 containers: []
	W1209 11:45:09.030247  657318 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:45:09.030256  657318 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:45:09.030337  657318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:45:09.064263  657318 cri.go:89] found id: ""
	I1209 11:45:09.064297  657318 logs.go:282] 0 containers: []
	W1209 11:45:09.064309  657318 logs.go:284] No container was found matching "etcd"
	I1209 11:45:09.064322  657318 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:45:09.064405  657318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:45:09.099512  657318 cri.go:89] found id: ""
	I1209 11:45:09.099549  657318 logs.go:282] 0 containers: []
	W1209 11:45:09.099561  657318 logs.go:284] No container was found matching "coredns"
	I1209 11:45:09.099570  657318 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:45:09.099645  657318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:45:09.132754  657318 cri.go:89] found id: ""
	I1209 11:45:09.132784  657318 logs.go:282] 0 containers: []
	W1209 11:45:09.132795  657318 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:45:09.132804  657318 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:45:09.132864  657318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:45:09.165866  657318 cri.go:89] found id: ""
	I1209 11:45:09.165902  657318 logs.go:282] 0 containers: []
	W1209 11:45:09.165916  657318 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:45:09.165926  657318 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:45:09.166003  657318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:45:09.199134  657318 cri.go:89] found id: ""
	I1209 11:45:09.199176  657318 logs.go:282] 0 containers: []
	W1209 11:45:09.199189  657318 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:45:09.199198  657318 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:45:09.199268  657318 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:45:09.232293  657318 cri.go:89] found id: ""
	I1209 11:45:09.232328  657318 logs.go:282] 0 containers: []
	W1209 11:45:09.232338  657318 logs.go:284] No container was found matching "kindnet"
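With the control plane never having come up, minikube's post-mortem loop above queries CRI-O for every expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) and finds no containers at all, confirming the kubelet never created the static pods. The same check can be run by hand with the command shown in the log, for example:

    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output means the static pod container was never created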
	I1209 11:45:09.232350  657318 logs.go:123] Gathering logs for kubelet ...
	I1209 11:45:09.232365  657318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:45:09.283086  657318 logs.go:123] Gathering logs for dmesg ...
	I1209 11:45:09.283130  657318 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:45:09.299982  657318 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:45:09.300041  657318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:45:09.460803  657318 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:45:09.460838  657318 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:45:09.460856  657318 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:45:09.567903  657318 logs.go:123] Gathering logs for container status ...
	I1209 11:45:09.567951  657318 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1209 11:45:09.612621  657318 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1209 11:45:09.612708  657318 out.go:270] * 
	W1209 11:45:09.612795  657318 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 11:45:09.612810  657318 out.go:270] * 
	W1209 11:45:09.613673  657318 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 11:45:09.617717  657318 out.go:201] 
	W1209 11:45:09.619094  657318 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 11:45:09.619133  657318 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 11:45:09.619157  657318 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 11:45:09.620662  657318 out.go:201] 

                                                
                                                
** /stderr **
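The wait-control-plane timeout above means the kubelet never became healthy on port 10248. A minimal troubleshooting sketch for this failure mode, combining the commands quoted in the kubeadm output with the cgroup-driver override that minikube itself suggests (the CRI-O and kubelet config file paths are assumptions, not taken from this report):

    # Inspect the kubelet and its journal, as the kubeadm output suggests
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 50

    # A cgroup-driver mismatch between CRI-O and the kubelet is a common cause;
    # compare the two settings (paths assumed, verify on the node)
    grep -i cgroup_manager /etc/crio/crio.conf
    grep -i cgroupDriver /var/lib/kubelet/config.yaml

    # List any control-plane containers that did start, per the kubeadm hint
    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # Retry with the override suggested in the output above
    out/minikube-linux-amd64 start -p kubernetes-upgrade-835095 --memory=2200 \
      --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd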
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-835095 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-835095
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-835095: (5.588869736s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-835095 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-835095 status --format={{.Host}}: exit status 7 (77.020354ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
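The non-zero status here is expected: the profile was just stopped, and minikube status returns a non-zero exit code for a stopped host, which the test treats as acceptable. A quick way to reproduce the check by hand, using the same command as in the log:

    out/minikube-linux-amd64 -p kubernetes-upgrade-835095 status --format={{.Host}}
    echo $?    # non-zero while the VM is stopped; stdout prints "Stopped"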
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-835095 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-835095 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.4000136s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-835095 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-835095 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-835095 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (87.712381ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-835095] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20068
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-835095
	    minikube start -p kubernetes-upgrade-835095 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8350952 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-835095 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
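The refusal above (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) is the behaviour the test asserts: minikube will not downgrade an existing cluster in place. If a v1.20.0 cluster were actually wanted, the first suggestion from the output is the safe path; a sketch reusing this run's profile name and the driver/runtime flags the test itself passes:

    minikube delete -p kubernetes-upgrade-835095
    minikube start -p kubernetes-upgrade-835095 --kubernetes-version=v1.20.0 \
      --driver=kvm2 --container-runtime=crio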
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-835095 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-835095 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (22.498547663s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-12-09 11:46:22.397409781 +0000 UTC m=+4370.858135298
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-835095 -n kubernetes-upgrade-835095
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-835095 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-835095 logs -n 25: (1.504679906s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-935628 -- sudo                        | cert-options-935628          | jenkins | v1.34.0 | 09 Dec 24 11:40 UTC | 09 Dec 24 11:40 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                              |         |         |                     |                     |
	| delete  | -p cert-options-935628                                | cert-options-935628          | jenkins | v1.34.0 | 09 Dec 24 11:40 UTC | 09 Dec 24 11:40 UTC |
	| start   | -p kubernetes-upgrade-835095                          | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:40 UTC |                     |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-597739 sudo                           | NoKubernetes-597739          | jenkins | v1.34.0 | 09 Dec 24 11:40 UTC |                     |
	|         | systemctl is-active --quiet                           |                              |         |         |                     |                     |
	|         | service kubelet                                       |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-597739                                | NoKubernetes-597739          | jenkins | v1.34.0 | 09 Dec 24 11:40 UTC | 09 Dec 24 11:40 UTC |
	| start   | -p running-upgrade-119214                             | minikube                     | jenkins | v1.26.0 | 09 Dec 24 11:40 UTC | 09 Dec 24 11:41 UTC |
	|         | --memory=2200 --vm-driver=kvm2                        |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                              |         |         |                     |                     |
	| stop    | stopped-upgrade-676904 stop                           | minikube                     | jenkins | v1.26.0 | 09 Dec 24 11:40 UTC | 09 Dec 24 11:41 UTC |
	| start   | -p stopped-upgrade-676904                             | stopped-upgrade-676904       | jenkins | v1.34.0 | 09 Dec 24 11:41 UTC | 09 Dec 24 11:42 UTC |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| start   | -p running-upgrade-119214                             | running-upgrade-119214       | jenkins | v1.34.0 | 09 Dec 24 11:41 UTC | 09 Dec 24 11:43 UTC |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-676904                             | stopped-upgrade-676904       | jenkins | v1.34.0 | 09 Dec 24 11:42 UTC | 09 Dec 24 11:42 UTC |
	| start   | -p old-k8s-version-014592                             | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:42 UTC |                     |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                              |         |         |                     |                     |
	|         | --kvm-network=default                                 |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                              |         |         |                     |                     |
	|         | --keep-context=false                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                         |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                              |         |         |                     |                     |
	| start   | -p cert-expiration-752166                             | cert-expiration-752166       | jenkins | v1.34.0 | 09 Dec 24 11:42 UTC | 09 Dec 24 11:43 UTC |
	|         | --memory=2048                                         |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                               |                              |         |         |                     |                     |
	|         | --driver=kvm2                                         |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-752166                             | cert-expiration-752166       | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:43 UTC |
	| start   | -p embed-certs-005123                                 | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:43 UTC |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                          |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-119214                             | running-upgrade-119214       | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:43 UTC |
	| delete  | -p                                                    | disable-driver-mounts-905993 | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:43 UTC |
	|         | disable-driver-mounts-905993                          |                              |         |         |                     |                     |
	| start   | -p no-preload-820741                                  | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:44 UTC |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                         |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                          |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-005123           | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC | 09 Dec 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                              |         |         |                     |                     |
	| stop    | -p embed-certs-005123                                 | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-820741            | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC | 09 Dec 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                              |         |         |                     |                     |
	| stop    | -p no-preload-820741                                  | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-835095                          | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	| start   | -p kubernetes-upgrade-835095                          | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                          | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC |                     |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                              |         |         |                     |                     |
	|         | --driver=kvm2                                         |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                          | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:46 UTC |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	|---------|-------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 11:45:59
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 11:45:59.942739  660953 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:45:59.943004  660953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:45:59.943014  660953 out.go:358] Setting ErrFile to fd 2...
	I1209 11:45:59.943019  660953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:45:59.943229  660953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:45:59.943768  660953 out.go:352] Setting JSON to false
	I1209 11:45:59.944716  660953 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":16104,"bootTime":1733728656,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 11:45:59.944825  660953 start.go:139] virtualization: kvm guest
	I1209 11:45:59.946752  660953 out.go:177] * [kubernetes-upgrade-835095] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 11:45:59.948036  660953 notify.go:220] Checking for updates...
	I1209 11:45:59.948052  660953 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:45:59.949359  660953 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:45:59.950448  660953 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:45:59.951626  660953 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:45:59.952805  660953 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 11:45:59.953958  660953 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:45:59.955360  660953 config.go:182] Loaded profile config "kubernetes-upgrade-835095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:45:59.955897  660953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:45:59.955967  660953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:45:59.971146  660953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35255
	I1209 11:45:59.971734  660953 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:45:59.972336  660953 main.go:141] libmachine: Using API Version  1
	I1209 11:45:59.972365  660953 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:45:59.972742  660953 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:45:59.972950  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .DriverName
	I1209 11:45:59.973226  660953 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:45:59.973544  660953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:45:59.973581  660953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:45:59.988786  660953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33489
	I1209 11:45:59.989259  660953 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:45:59.989827  660953 main.go:141] libmachine: Using API Version  1
	I1209 11:45:59.989875  660953 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:45:59.990192  660953 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:45:59.990366  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .DriverName
	I1209 11:46:00.027978  660953 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 11:46:00.029333  660953 start.go:297] selected driver: kvm2
	I1209 11:46:00.029353  660953 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-835095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-835095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.241 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:46:00.029494  660953 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:46:00.030566  660953 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:46:00.030683  660953 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 11:46:00.046579  660953 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 11:46:00.047111  660953 cni.go:84] Creating CNI manager for ""
	I1209 11:46:00.047167  660953 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:46:00.047227  660953 start.go:340] cluster config:
	{Name:kubernetes-upgrade-835095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-835095 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.241 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:46:00.047376  660953 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:46:00.049188  660953 out.go:177] * Starting "kubernetes-upgrade-835095" primary control-plane node in "kubernetes-upgrade-835095" cluster
	I1209 11:46:00.050554  660953 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:46:00.050599  660953 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 11:46:00.050611  660953 cache.go:56] Caching tarball of preloaded images
	I1209 11:46:00.050710  660953 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 11:46:00.050726  660953 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 11:46:00.050848  660953 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/config.json ...
	I1209 11:46:00.051080  660953 start.go:360] acquireMachinesLock for kubernetes-upgrade-835095: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:46:00.051143  660953 start.go:364] duration metric: took 37.618µs to acquireMachinesLock for "kubernetes-upgrade-835095"
	I1209 11:46:00.051162  660953 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:46:00.051182  660953 fix.go:54] fixHost starting: 
	I1209 11:46:00.051569  660953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:46:00.051615  660953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:46:00.066539  660953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36339
	I1209 11:46:00.067060  660953 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:46:00.067645  660953 main.go:141] libmachine: Using API Version  1
	I1209 11:46:00.067670  660953 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:46:00.068023  660953 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:46:00.068245  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .DriverName
	I1209 11:46:00.068405  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetState
	I1209 11:46:00.070286  660953 fix.go:112] recreateIfNeeded on kubernetes-upgrade-835095: state=Running err=<nil>
	W1209 11:46:00.070328  660953 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:46:00.071796  660953 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-835095" VM ...
	I1209 11:46:00.073028  660953 machine.go:93] provisionDockerMachine start ...
	I1209 11:46:00.073055  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .DriverName
	I1209 11:46:00.073283  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:46:00.076910  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:00.076971  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:45:26 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:46:00.077016  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:00.077044  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:46:00.077229  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:46:00.077424  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:46:00.077707  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:46:00.077899  660953 main.go:141] libmachine: Using SSH client type: native
	I1209 11:46:00.078154  660953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I1209 11:46:00.078193  660953 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:46:00.204147  660953 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-835095
	
	I1209 11:46:00.204176  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetMachineName
	I1209 11:46:00.204484  660953 buildroot.go:166] provisioning hostname "kubernetes-upgrade-835095"
	I1209 11:46:00.204533  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetMachineName
	I1209 11:46:00.204720  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:46:00.207936  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:00.208275  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:45:26 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:46:00.208302  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:00.208523  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:46:00.208708  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:46:00.208902  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:46:00.209025  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:46:00.209295  660953 main.go:141] libmachine: Using SSH client type: native
	I1209 11:46:00.209531  660953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I1209 11:46:00.209552  660953 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-835095 && echo "kubernetes-upgrade-835095" | sudo tee /etc/hostname
	I1209 11:46:00.340856  660953 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-835095
	
	I1209 11:46:00.340890  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:46:00.343734  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:00.344087  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:45:26 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:46:00.344118  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:00.344275  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:46:00.344482  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:46:00.344643  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:46:00.344788  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:46:00.344930  660953 main.go:141] libmachine: Using SSH client type: native
	I1209 11:46:00.345099  660953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I1209 11:46:00.345115  660953 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-835095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-835095/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-835095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:46:00.465339  660953 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:46:00.465385  660953 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:46:00.465408  660953 buildroot.go:174] setting up certificates
	I1209 11:46:00.465420  660953 provision.go:84] configureAuth start
	I1209 11:46:00.465435  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetMachineName
	I1209 11:46:00.465798  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetIP
	I1209 11:46:00.468850  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:00.469237  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:45:26 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:46:00.469273  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:00.469473  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:46:00.472287  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:00.472722  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:45:26 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:46:00.472755  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:00.472979  660953 provision.go:143] copyHostCerts
	I1209 11:46:00.473054  660953 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:46:00.473080  660953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:46:00.473165  660953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:46:00.473293  660953 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:46:00.473306  660953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:46:00.473348  660953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:46:00.473441  660953 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:46:00.473453  660953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:46:00.473486  660953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:46:00.473563  660953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-835095 san=[127.0.0.1 192.168.50.241 kubernetes-upgrade-835095 localhost minikube]
	I1209 11:46:00.581283  660953 provision.go:177] copyRemoteCerts
	I1209 11:46:00.581349  660953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:46:00.581382  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:46:00.584067  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:00.584461  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:45:26 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:46:00.584495  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:00.584699  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:46:00.584943  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:46:00.585120  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:46:00.585301  660953 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095/id_rsa Username:docker}
	I1209 11:46:00.678229  660953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:46:00.708188  660953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1209 11:46:00.740447  660953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 11:46:00.768308  660953 provision.go:87] duration metric: took 302.872202ms to configureAuth
	I1209 11:46:00.768338  660953 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:46:00.768563  660953 config.go:182] Loaded profile config "kubernetes-upgrade-835095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:46:00.768643  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:46:00.771518  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:00.771884  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:45:26 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:46:00.771916  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:00.772050  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:46:00.772219  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:46:00.772402  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:46:00.772524  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:46:00.772683  660953 main.go:141] libmachine: Using SSH client type: native
	I1209 11:46:00.772892  660953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I1209 11:46:00.772909  660953 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:46:10.533670  660953 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:46:10.533714  660953 machine.go:96] duration metric: took 10.460665679s to provisionDockerMachine
	I1209 11:46:10.533733  660953 start.go:293] postStartSetup for "kubernetes-upgrade-835095" (driver="kvm2")
	I1209 11:46:10.533749  660953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:46:10.533778  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .DriverName
	I1209 11:46:10.534112  660953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:46:10.534165  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:46:10.536989  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:10.537392  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:45:26 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:46:10.537423  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:10.537647  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:46:10.537855  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:46:10.538017  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:46:10.538220  660953 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095/id_rsa Username:docker}
	I1209 11:46:10.625090  660953 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:46:10.629003  660953 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:46:10.629029  660953 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:46:10.629112  660953 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:46:10.629227  660953 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:46:10.629366  660953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:46:10.638334  660953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:46:10.660113  660953 start.go:296] duration metric: took 126.361367ms for postStartSetup
	I1209 11:46:10.660166  660953 fix.go:56] duration metric: took 10.608995633s for fixHost
	I1209 11:46:10.660195  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:46:10.662739  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:10.663087  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:45:26 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:46:10.663111  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:10.663266  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:46:10.663454  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:46:10.663606  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:46:10.663722  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:46:10.663873  660953 main.go:141] libmachine: Using SSH client type: native
	I1209 11:46:10.664061  660953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.241 22 <nil> <nil>}
	I1209 11:46:10.664075  660953 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:46:10.774614  660953 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733744770.764916280
	
	I1209 11:46:10.774647  660953 fix.go:216] guest clock: 1733744770.764916280
	I1209 11:46:10.774660  660953 fix.go:229] Guest: 2024-12-09 11:46:10.76491628 +0000 UTC Remote: 2024-12-09 11:46:10.660171464 +0000 UTC m=+10.756985049 (delta=104.744816ms)
	I1209 11:46:10.774720  660953 fix.go:200] guest clock delta is within tolerance: 104.744816ms
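The clock probe above is a plain comparison of date +%s.%N run on the guest over SSH against the host wall clock; the roughly 105 ms delta shown is accepted without any resync. A minimal standalone sketch of the same check, assuming the guest address and SSH key from this log:

    # Compare guest and host wall clocks; the log treats a delta of ~105 ms as within tolerance.
    guest_ts=$(ssh -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095/id_rsa docker@192.168.50.241 'date +%s.%N')
    host_ts=$(date +%s.%N)
    echo "delta: $(echo "$host_ts - $guest_ts" | bc) s"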
	I1209 11:46:10.774726  660953 start.go:83] releasing machines lock for "kubernetes-upgrade-835095", held for 10.723574473s
	I1209 11:46:10.774755  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .DriverName
	I1209 11:46:10.775043  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetIP
	I1209 11:46:10.777727  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:10.778088  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:45:26 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:46:10.778106  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:10.778309  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .DriverName
	I1209 11:46:10.778860  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .DriverName
	I1209 11:46:10.779019  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .DriverName
	I1209 11:46:10.779094  660953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:46:10.779136  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:46:10.779389  660953 ssh_runner.go:195] Run: cat /version.json
	I1209 11:46:10.779412  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHHostname
	I1209 11:46:10.782078  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:10.782402  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:45:26 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:46:10.782436  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:10.782600  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:46:10.782631  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:10.782789  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:46:10.782966  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:46:10.783020  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:45:26 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:46:10.783042  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:10.783143  660953 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095/id_rsa Username:docker}
	I1209 11:46:10.783247  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHPort
	I1209 11:46:10.783382  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHKeyPath
	I1209 11:46:10.783518  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetSSHUsername
	I1209 11:46:10.783677  660953 sshutil.go:53] new ssh client: &{IP:192.168.50.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/kubernetes-upgrade-835095/id_rsa Username:docker}
	I1209 11:46:10.863398  660953 ssh_runner.go:195] Run: systemctl --version
	I1209 11:46:10.896848  660953 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:46:11.049866  660953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:46:11.056293  660953 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:46:11.056368  660953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:46:11.065137  660953 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 11:46:11.065164  660953 start.go:495] detecting cgroup driver to use...
	I1209 11:46:11.065233  660953 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:46:11.080430  660953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:46:11.093975  660953 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:46:11.094049  660953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:46:11.107823  660953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:46:11.121067  660953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:46:11.264797  660953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:46:11.401143  660953 docker.go:233] disabling docker service ...
	I1209 11:46:11.401231  660953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:46:11.416851  660953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:46:11.429880  660953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:46:11.564229  660953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:46:11.706068  660953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:46:11.731690  660953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:46:11.750493  660953 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:46:11.750557  660953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:46:11.762803  660953 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:46:11.762887  660953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:46:11.774017  660953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:46:11.783879  660953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:46:11.793242  660953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:46:11.803004  660953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:46:11.812504  660953 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:46:11.823151  660953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:46:11.833024  660953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:46:11.841751  660953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:46:11.850245  660953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:46:12.034312  660953 ssh_runner.go:195] Run: sudo systemctl restart crio
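For anyone reproducing the runtime setup above by hand, the cri-o configuration performed in this step reduces to the following sketch (commands, paths, and values taken from the log lines above; run inside the guest):

    # Point crictl at the cri-o socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # Pin the pause image and the cgroupfs cgroup manager in the drop-in config.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # Reload units and restart the runtime to apply the changes.
    sudo systemctl daemon-reload && sudo systemctl restart crio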
	I1209 11:46:12.499983  660953 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:46:12.500071  660953 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:46:12.504979  660953 start.go:563] Will wait 60s for crictl version
	I1209 11:46:12.505048  660953 ssh_runner.go:195] Run: which crictl
	I1209 11:46:12.508433  660953 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:46:12.547604  660953 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:46:12.547711  660953 ssh_runner.go:195] Run: crio --version
	I1209 11:46:12.574887  660953 ssh_runner.go:195] Run: crio --version
	I1209 11:46:12.604280  660953 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:46:12.605453  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) Calling .GetIP
	I1209 11:46:12.607978  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:12.608334  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:1d:d0", ip: ""} in network mk-kubernetes-upgrade-835095: {Iface:virbr2 ExpiryTime:2024-12-09 12:45:26 +0000 UTC Type:0 Mac:52:54:00:76:1d:d0 Iaid: IPaddr:192.168.50.241 Prefix:24 Hostname:kubernetes-upgrade-835095 Clientid:01:52:54:00:76:1d:d0}
	I1209 11:46:12.608363  660953 main.go:141] libmachine: (kubernetes-upgrade-835095) DBG | domain kubernetes-upgrade-835095 has defined IP address 192.168.50.241 and MAC address 52:54:00:76:1d:d0 in network mk-kubernetes-upgrade-835095
	I1209 11:46:12.608630  660953 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1209 11:46:12.612459  660953 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-835095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:kubernetes-upgrade-835095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.241 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:46:12.612555  660953 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:46:12.612596  660953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:46:12.652851  660953 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:46:12.652876  660953 crio.go:433] Images already preloaded, skipping extraction
	I1209 11:46:12.652936  660953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:46:12.683839  660953 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:46:12.683873  660953 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:46:12.683885  660953 kubeadm.go:934] updating node { 192.168.50.241 8443 v1.31.2 crio true true} ...
	I1209 11:46:12.684027  660953 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-835095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-835095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:46:12.684127  660953 ssh_runner.go:195] Run: crio config
	I1209 11:46:12.731932  660953 cni.go:84] Creating CNI manager for ""
	I1209 11:46:12.731954  660953 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:46:12.731965  660953 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:46:12.731989  660953 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.241 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-835095 NodeName:kubernetes-upgrade-835095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:46:12.732112  660953 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-835095"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.241"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.241"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
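The generated kubeadm configuration above is later written to /var/tmp/minikube/kubeadm.yaml.new on the guest (see the scp step below). A quick way to sanity-check such a file, assuming the kubeadm v1.31.2 binary from the directory shown in the log and its config validate subcommand (present in recent kubeadm releases):

    # Report any schema or value problems in the generated configuration.
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new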
	
	I1209 11:46:12.732205  660953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:46:12.741819  660953 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:46:12.741894  660953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:46:12.750613  660953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1209 11:46:12.766682  660953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:46:12.781913  660953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1209 11:46:12.797109  660953 ssh_runner.go:195] Run: grep 192.168.50.241	control-plane.minikube.internal$ /etc/hosts
	I1209 11:46:12.800754  660953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:46:12.935209  660953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:46:12.949165  660953 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095 for IP: 192.168.50.241
	I1209 11:46:12.949190  660953 certs.go:194] generating shared ca certs ...
	I1209 11:46:12.949211  660953 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:46:12.949400  660953 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:46:12.949456  660953 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:46:12.949471  660953 certs.go:256] generating profile certs ...
	I1209 11:46:12.949571  660953 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/client.key
	I1209 11:46:12.949640  660953 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/apiserver.key.65a63dc6
	I1209 11:46:12.949697  660953 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/proxy-client.key
	I1209 11:46:12.949829  660953 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:46:12.949869  660953 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:46:12.949885  660953 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:46:12.949919  660953 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:46:12.949954  660953 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:46:12.949987  660953 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:46:12.950043  660953 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:46:12.950779  660953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:46:12.974783  660953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:46:13.029481  660953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:46:13.087889  660953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:46:13.150033  660953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1209 11:46:13.254553  660953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:46:13.302417  660953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:46:13.347445  660953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/kubernetes-upgrade-835095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 11:46:13.393051  660953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:46:13.460665  660953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:46:13.491281  660953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:46:13.519883  660953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:46:13.539914  660953 ssh_runner.go:195] Run: openssl version
	I1209 11:46:13.546533  660953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:46:13.557632  660953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:46:13.566703  660953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:46:13.566764  660953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:46:13.572578  660953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:46:13.582934  660953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:46:13.598953  660953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:46:13.603299  660953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:46:13.603369  660953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:46:13.608744  660953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:46:13.618642  660953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:46:13.630545  660953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:46:13.634983  660953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:46:13.635051  660953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:46:13.640910  660953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
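The ln -fs steps above follow OpenSSL's subject-hash convention: each CA certificate is made reachable in /etc/ssl/certs through a symlink named <hash>.0, where the hash is produced by openssl x509 -hash. A sketch of one such link, using the minikubeCA certificate from this log:

    # Compute the subject hash (b5213941 in this run) and create the <hash>.0 link OpenSSL looks up.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"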
	I1209 11:46:13.652184  660953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:46:13.657332  660953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:46:13.662796  660953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:46:13.669627  660953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:46:13.676199  660953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:46:13.683574  660953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:46:13.689434  660953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
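The openssl x509 -checkend 86400 probes above are expiry checks: the command exits 0 if the certificate is still valid 86400 seconds (24 hours) from now and non-zero if it expires sooner, which lets the start path reuse certificates that are still valid. For example, against one of the certificates checked above:

    # Exit status 0: valid for at least another 24 h; non-zero: expires within 24 h.
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
        && echo "still valid for 24h" || echo "expires within 24h"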
	I1209 11:46:13.695837  660953 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-835095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.2 ClusterName:kubernetes-upgrade-835095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.241 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:46:13.695928  660953 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:46:13.695984  660953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:46:13.793049  660953 cri.go:89] found id: "a02613570c0599a2feca20341050dfb02ec47a14d1176c076cb02d72287e4734"
	I1209 11:46:13.793080  660953 cri.go:89] found id: "dcde23f61506209a9be6b52ff7180948ec9bb464c76e1b7fb635aaa7c3059d9e"
	I1209 11:46:13.793085  660953 cri.go:89] found id: "c83620f9b6cde8e82f4aeaf466195b0230b8ca6cb65ebef2d7e5f84f4f9a3cf4"
	I1209 11:46:13.793088  660953 cri.go:89] found id: "aa5fcff4429da277dc155109601bd8bd19b59ac03f1bf772193381e188eba44e"
	I1209 11:46:13.793091  660953 cri.go:89] found id: "1eba1ac170df5dbdb9471a5da2498c33ab3e7f38f990e40fd8920654bf8fe9fb"
	I1209 11:46:13.793094  660953 cri.go:89] found id: "bf7296a43dfaaba2cda3bf4fd9f9e604dc9bc61ba37e356abc3cc43d12a557da"
	I1209 11:46:13.793097  660953 cri.go:89] found id: "ec664f69236febbce299b765441108ece1f53da3c187614a92c84165683ae1fa"
	I1209 11:46:13.793099  660953 cri.go:89] found id: "87277ee2064a37d3834c1c1916d2bfd5656fae0683db6c4827db7dac1070ae98"
	I1209 11:46:13.793101  660953 cri.go:89] found id: "6ec6b1939736f91e8f2f97a27135eb21e1b61b670ab53b0045a74e2a5869bf4d"
	I1209 11:46:13.793107  660953 cri.go:89] found id: "2152b4c09a303db90dc70dbbdc25569871ddb5911029d548449f69b1dc3b8f56"
	I1209 11:46:13.793109  660953 cri.go:89] found id: "9d863bbaad8359de6f323b9844e1be46eaf5c3fc33919874032a723a3e0726b6"
	I1209 11:46:13.793112  660953 cri.go:89] found id: ""
	I1209 11:46:13.793169  660953 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-835095 -n kubernetes-upgrade-835095
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-835095 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-835095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-835095
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-835095: (1.100182148s)
--- FAIL: TestKubernetesUpgrade (363.31s)
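The container inventory that closes the captured log can be reproduced with the same two commands the test runner issues; a sketch, assuming shell access to a CRI-O minikube guest (the VM from this run has since been deleted):

    # List kube-system container IDs as seen by CRI-O, then the low-level runc view.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc list -f json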

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (94.2s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-529265 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-529265 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m30.62019775s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-529265] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20068
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-529265" primary control-plane node in "pause-529265" cluster
	* Updating the running kvm2 "pause-529265" VM ...
	* Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-529265" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 11:37:32.811128  654294 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:37:32.811552  654294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:37:32.811590  654294 out.go:358] Setting ErrFile to fd 2...
	I1209 11:37:32.811608  654294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:37:32.811896  654294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:37:32.812623  654294 out.go:352] Setting JSON to false
	I1209 11:37:32.814051  654294 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":15597,"bootTime":1733728656,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 11:37:32.814215  654294 start.go:139] virtualization: kvm guest
	I1209 11:37:32.816793  654294 out.go:177] * [pause-529265] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 11:37:32.818077  654294 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:37:32.818103  654294 notify.go:220] Checking for updates...
	I1209 11:37:32.820303  654294 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:37:32.821341  654294 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:37:32.822394  654294 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:37:32.823341  654294 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 11:37:32.824274  654294 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:37:32.825922  654294 config.go:182] Loaded profile config "pause-529265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:37:32.826573  654294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:37:32.826645  654294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:37:32.847049  654294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36571
	I1209 11:37:32.847748  654294 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:37:32.848459  654294 main.go:141] libmachine: Using API Version  1
	I1209 11:37:32.848486  654294 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:37:32.848874  654294 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:37:32.849069  654294 main.go:141] libmachine: (pause-529265) Calling .DriverName
	I1209 11:37:32.849444  654294 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:37:32.849885  654294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:37:32.849970  654294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:37:32.870325  654294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33037
	I1209 11:37:32.870878  654294 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:37:32.871809  654294 main.go:141] libmachine: Using API Version  1
	I1209 11:37:32.871862  654294 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:37:32.872343  654294 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:37:32.872529  654294 main.go:141] libmachine: (pause-529265) Calling .DriverName
	I1209 11:37:32.921493  654294 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 11:37:32.922534  654294 start.go:297] selected driver: kvm2
	I1209 11:37:32.922554  654294 start.go:901] validating driver "kvm2" against &{Name:pause-529265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.2 ClusterName:pause-529265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:37:32.922726  654294 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:37:32.923189  654294 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:37:32.923292  654294 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 11:37:32.939436  654294 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 11:37:32.940455  654294 cni.go:84] Creating CNI manager for ""
	I1209 11:37:32.940512  654294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:37:32.940632  654294 start.go:340] cluster config:
	{Name:pause-529265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-529265 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false
registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:37:32.940816  654294 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:37:32.942362  654294 out.go:177] * Starting "pause-529265" primary control-plane node in "pause-529265" cluster
	I1209 11:37:32.943587  654294 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:37:32.943632  654294 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 11:37:32.943642  654294 cache.go:56] Caching tarball of preloaded images
	I1209 11:37:32.943729  654294 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 11:37:32.943742  654294 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 11:37:32.943890  654294 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/pause-529265/config.json ...
	I1209 11:37:32.944118  654294 start.go:360] acquireMachinesLock for pause-529265: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:38:08.478967  654294 start.go:364] duration metric: took 35.534807468s to acquireMachinesLock for "pause-529265"
	I1209 11:38:08.479045  654294 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:38:08.479058  654294 fix.go:54] fixHost starting: 
	I1209 11:38:08.479492  654294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:38:08.479536  654294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:38:08.500247  654294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36363
	I1209 11:38:08.500691  654294 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:38:08.501268  654294 main.go:141] libmachine: Using API Version  1
	I1209 11:38:08.501301  654294 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:38:08.501704  654294 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:38:08.501922  654294 main.go:141] libmachine: (pause-529265) Calling .DriverName
	I1209 11:38:08.502099  654294 main.go:141] libmachine: (pause-529265) Calling .GetState
	I1209 11:38:08.503942  654294 fix.go:112] recreateIfNeeded on pause-529265: state=Running err=<nil>
	W1209 11:38:08.503968  654294 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:38:08.505823  654294 out.go:177] * Updating the running kvm2 "pause-529265" VM ...
	I1209 11:38:08.506936  654294 machine.go:93] provisionDockerMachine start ...
	I1209 11:38:08.506968  654294 main.go:141] libmachine: (pause-529265) Calling .DriverName
	I1209 11:38:08.507195  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHHostname
	I1209 11:38:08.509757  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:08.510234  654294 main.go:141] libmachine: (pause-529265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:89:cb", ip: ""} in network mk-pause-529265: {Iface:virbr1 ExpiryTime:2024-12-09 12:36:48 +0000 UTC Type:0 Mac:52:54:00:db:89:cb Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:pause-529265 Clientid:01:52:54:00:db:89:cb}
	I1209 11:38:08.510261  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined IP address 192.168.39.137 and MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:08.510426  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHPort
	I1209 11:38:08.510580  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHKeyPath
	I1209 11:38:08.510708  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHKeyPath
	I1209 11:38:08.510801  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHUsername
	I1209 11:38:08.510920  654294 main.go:141] libmachine: Using SSH client type: native
	I1209 11:38:08.511099  654294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I1209 11:38:08.511108  654294 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:38:08.619339  654294 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-529265
	
	I1209 11:38:08.619369  654294 main.go:141] libmachine: (pause-529265) Calling .GetMachineName
	I1209 11:38:08.619656  654294 buildroot.go:166] provisioning hostname "pause-529265"
	I1209 11:38:08.619693  654294 main.go:141] libmachine: (pause-529265) Calling .GetMachineName
	I1209 11:38:08.619907  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHHostname
	I1209 11:38:08.622800  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:08.623255  654294 main.go:141] libmachine: (pause-529265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:89:cb", ip: ""} in network mk-pause-529265: {Iface:virbr1 ExpiryTime:2024-12-09 12:36:48 +0000 UTC Type:0 Mac:52:54:00:db:89:cb Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:pause-529265 Clientid:01:52:54:00:db:89:cb}
	I1209 11:38:08.623285  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined IP address 192.168.39.137 and MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:08.623515  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHPort
	I1209 11:38:08.623679  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHKeyPath
	I1209 11:38:08.623839  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHKeyPath
	I1209 11:38:08.624039  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHUsername
	I1209 11:38:08.624285  654294 main.go:141] libmachine: Using SSH client type: native
	I1209 11:38:08.624473  654294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I1209 11:38:08.624487  654294 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-529265 && echo "pause-529265" | sudo tee /etc/hostname
	I1209 11:38:08.747981  654294 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-529265
	
	I1209 11:38:08.748020  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHHostname
	I1209 11:38:08.750763  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:08.751153  654294 main.go:141] libmachine: (pause-529265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:89:cb", ip: ""} in network mk-pause-529265: {Iface:virbr1 ExpiryTime:2024-12-09 12:36:48 +0000 UTC Type:0 Mac:52:54:00:db:89:cb Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:pause-529265 Clientid:01:52:54:00:db:89:cb}
	I1209 11:38:08.751189  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined IP address 192.168.39.137 and MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:08.751803  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHPort
	I1209 11:38:08.753096  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHKeyPath
	I1209 11:38:08.753304  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHKeyPath
	I1209 11:38:08.753488  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHUsername
	I1209 11:38:08.753639  654294 main.go:141] libmachine: Using SSH client type: native
	I1209 11:38:08.753815  654294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I1209 11:38:08.753831  654294 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-529265' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-529265/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-529265' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:38:08.858984  654294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:38:08.859032  654294 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:38:08.859064  654294 buildroot.go:174] setting up certificates
	I1209 11:38:08.859076  654294 provision.go:84] configureAuth start
	I1209 11:38:08.859088  654294 main.go:141] libmachine: (pause-529265) Calling .GetMachineName
	I1209 11:38:08.859393  654294 main.go:141] libmachine: (pause-529265) Calling .GetIP
	I1209 11:38:08.862666  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:08.863061  654294 main.go:141] libmachine: (pause-529265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:89:cb", ip: ""} in network mk-pause-529265: {Iface:virbr1 ExpiryTime:2024-12-09 12:36:48 +0000 UTC Type:0 Mac:52:54:00:db:89:cb Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:pause-529265 Clientid:01:52:54:00:db:89:cb}
	I1209 11:38:08.863090  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined IP address 192.168.39.137 and MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:08.863268  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHHostname
	I1209 11:38:08.865470  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:08.865806  654294 main.go:141] libmachine: (pause-529265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:89:cb", ip: ""} in network mk-pause-529265: {Iface:virbr1 ExpiryTime:2024-12-09 12:36:48 +0000 UTC Type:0 Mac:52:54:00:db:89:cb Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:pause-529265 Clientid:01:52:54:00:db:89:cb}
	I1209 11:38:08.865836  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined IP address 192.168.39.137 and MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:08.865975  654294 provision.go:143] copyHostCerts
	I1209 11:38:08.866038  654294 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:38:08.866063  654294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:38:08.866136  654294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:38:08.866338  654294 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:38:08.866352  654294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:38:08.866377  654294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:38:08.866445  654294 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:38:08.866452  654294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:38:08.866473  654294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:38:08.866521  654294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.pause-529265 san=[127.0.0.1 192.168.39.137 localhost minikube pause-529265]
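The san=[...] list above becomes the subject alternative names of the generated server certificate. A minimal sketch of producing a certificate with those SANs using Go's crypto/x509 (self-signed here for brevity; minikube signs server.pem with its CA key, and the names, IPs, and organization below are copied from the log while the validity period is arbitrary):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-529265"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // validity chosen arbitrarily for the sketch
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Names and IPs from the san=[...] list in the log above.
		DNSNames:    []string{"localhost", "minikube", "pause-529265"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.137")},
	}
	// Self-signed for brevity; the real server cert is signed by the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}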
	I1209 11:38:08.926752  654294 provision.go:177] copyRemoteCerts
	I1209 11:38:08.926831  654294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:38:08.926866  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHHostname
	I1209 11:38:08.930142  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:08.930607  654294 main.go:141] libmachine: (pause-529265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:89:cb", ip: ""} in network mk-pause-529265: {Iface:virbr1 ExpiryTime:2024-12-09 12:36:48 +0000 UTC Type:0 Mac:52:54:00:db:89:cb Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:pause-529265 Clientid:01:52:54:00:db:89:cb}
	I1209 11:38:08.930635  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined IP address 192.168.39.137 and MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:08.930925  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHPort
	I1209 11:38:08.931164  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHKeyPath
	I1209 11:38:08.931371  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHUsername
	I1209 11:38:08.931574  654294 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/pause-529265/id_rsa Username:docker}
	I1209 11:38:09.018484  654294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:38:09.042403  654294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 11:38:09.068419  654294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 11:38:09.094676  654294 provision.go:87] duration metric: took 235.584945ms to configureAuth
	I1209 11:38:09.094708  654294 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:38:09.094930  654294 config.go:182] Loaded profile config "pause-529265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:38:09.095032  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHHostname
	I1209 11:38:09.097738  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:09.098095  654294 main.go:141] libmachine: (pause-529265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:89:cb", ip: ""} in network mk-pause-529265: {Iface:virbr1 ExpiryTime:2024-12-09 12:36:48 +0000 UTC Type:0 Mac:52:54:00:db:89:cb Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:pause-529265 Clientid:01:52:54:00:db:89:cb}
	I1209 11:38:09.098128  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined IP address 192.168.39.137 and MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:09.098285  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHPort
	I1209 11:38:09.098475  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHKeyPath
	I1209 11:38:09.098632  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHKeyPath
	I1209 11:38:09.098763  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHUsername
	I1209 11:38:09.098903  654294 main.go:141] libmachine: Using SSH client type: native
	I1209 11:38:09.099089  654294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I1209 11:38:09.099106  654294 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:38:14.592564  654294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:38:14.592603  654294 machine.go:96] duration metric: took 6.08564319s to provisionDockerMachine
	I1209 11:38:14.592622  654294 start.go:293] postStartSetup for "pause-529265" (driver="kvm2")
	I1209 11:38:14.592638  654294 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:38:14.592674  654294 main.go:141] libmachine: (pause-529265) Calling .DriverName
	I1209 11:38:14.593035  654294 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:38:14.593078  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHHostname
	I1209 11:38:14.596100  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:14.596516  654294 main.go:141] libmachine: (pause-529265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:89:cb", ip: ""} in network mk-pause-529265: {Iface:virbr1 ExpiryTime:2024-12-09 12:36:48 +0000 UTC Type:0 Mac:52:54:00:db:89:cb Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:pause-529265 Clientid:01:52:54:00:db:89:cb}
	I1209 11:38:14.596548  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined IP address 192.168.39.137 and MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:14.596786  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHPort
	I1209 11:38:14.596978  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHKeyPath
	I1209 11:38:14.597156  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHUsername
	I1209 11:38:14.597328  654294 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/pause-529265/id_rsa Username:docker}
	I1209 11:38:14.690459  654294 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:38:14.694358  654294 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:38:14.694390  654294 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:38:14.694479  654294 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:38:14.694556  654294 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:38:14.694642  654294 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:38:14.707514  654294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:38:14.734072  654294 start.go:296] duration metric: took 141.431916ms for postStartSetup
	I1209 11:38:14.734119  654294 fix.go:56] duration metric: took 6.255061353s for fixHost
	I1209 11:38:14.734147  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHHostname
	I1209 11:38:14.736915  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:14.737303  654294 main.go:141] libmachine: (pause-529265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:89:cb", ip: ""} in network mk-pause-529265: {Iface:virbr1 ExpiryTime:2024-12-09 12:36:48 +0000 UTC Type:0 Mac:52:54:00:db:89:cb Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:pause-529265 Clientid:01:52:54:00:db:89:cb}
	I1209 11:38:14.737331  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined IP address 192.168.39.137 and MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:14.737500  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHPort
	I1209 11:38:14.737709  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHKeyPath
	I1209 11:38:14.737902  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHKeyPath
	I1209 11:38:14.738068  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHUsername
	I1209 11:38:14.738280  654294 main.go:141] libmachine: Using SSH client type: native
	I1209 11:38:14.738442  654294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I1209 11:38:14.738453  654294 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:38:14.842831  654294 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733744294.831035079
	
	I1209 11:38:14.842862  654294 fix.go:216] guest clock: 1733744294.831035079
	I1209 11:38:14.842873  654294 fix.go:229] Guest: 2024-12-09 11:38:14.831035079 +0000 UTC Remote: 2024-12-09 11:38:14.734124584 +0000 UTC m=+41.978567798 (delta=96.910495ms)
	I1209 11:38:14.842895  654294 fix.go:200] guest clock delta is within tolerance: 96.910495ms
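The fix step reads the guest clock over SSH with "date +%s.%N", compares it against the host-side timestamp, and only resynchronizes when the drift exceeds a tolerance. A small Go sketch of that comparison using the two timestamps from the log (the 2s tolerance is an assumption for illustration, not minikube's actual constant):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock differs from the host
// reference by no more than max, and returns the absolute drift.
func withinTolerance(guest, host time.Time, max time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= max
}

func main() {
	// Values taken from the "Guest:" and "Remote:" timestamps logged above.
	host := time.Date(2024, 12, 9, 11, 38, 14, 734124584, time.UTC)
	guest := time.Date(2024, 12, 9, 11, 38, 14, 831035079, time.UTC)
	delta, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta≈96.9ms, true
}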
	I1209 11:38:14.842900  654294 start.go:83] releasing machines lock for "pause-529265", held for 6.363882262s
	I1209 11:38:14.842930  654294 main.go:141] libmachine: (pause-529265) Calling .DriverName
	I1209 11:38:14.843226  654294 main.go:141] libmachine: (pause-529265) Calling .GetIP
	I1209 11:38:14.850380  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:14.850843  654294 main.go:141] libmachine: (pause-529265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:89:cb", ip: ""} in network mk-pause-529265: {Iface:virbr1 ExpiryTime:2024-12-09 12:36:48 +0000 UTC Type:0 Mac:52:54:00:db:89:cb Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:pause-529265 Clientid:01:52:54:00:db:89:cb}
	I1209 11:38:14.850875  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined IP address 192.168.39.137 and MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:14.851018  654294 main.go:141] libmachine: (pause-529265) Calling .DriverName
	I1209 11:38:14.851566  654294 main.go:141] libmachine: (pause-529265) Calling .DriverName
	I1209 11:38:14.851732  654294 main.go:141] libmachine: (pause-529265) Calling .DriverName
	I1209 11:38:14.851821  654294 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:38:14.851887  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHHostname
	I1209 11:38:14.851911  654294 ssh_runner.go:195] Run: cat /version.json
	I1209 11:38:14.851932  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHHostname
	I1209 11:38:14.854872  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:14.854956  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:14.855323  654294 main.go:141] libmachine: (pause-529265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:89:cb", ip: ""} in network mk-pause-529265: {Iface:virbr1 ExpiryTime:2024-12-09 12:36:48 +0000 UTC Type:0 Mac:52:54:00:db:89:cb Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:pause-529265 Clientid:01:52:54:00:db:89:cb}
	I1209 11:38:14.855354  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined IP address 192.168.39.137 and MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:14.855387  654294 main.go:141] libmachine: (pause-529265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:89:cb", ip: ""} in network mk-pause-529265: {Iface:virbr1 ExpiryTime:2024-12-09 12:36:48 +0000 UTC Type:0 Mac:52:54:00:db:89:cb Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:pause-529265 Clientid:01:52:54:00:db:89:cb}
	I1209 11:38:14.855400  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined IP address 192.168.39.137 and MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:14.855662  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHPort
	I1209 11:38:14.855666  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHPort
	I1209 11:38:14.855860  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHKeyPath
	I1209 11:38:14.855863  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHKeyPath
	I1209 11:38:14.856042  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHUsername
	I1209 11:38:14.856058  654294 main.go:141] libmachine: (pause-529265) Calling .GetSSHUsername
	I1209 11:38:14.856206  654294 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/pause-529265/id_rsa Username:docker}
	I1209 11:38:14.856223  654294 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/pause-529265/id_rsa Username:docker}
	I1209 11:38:14.977478  654294 ssh_runner.go:195] Run: systemctl --version
	I1209 11:38:14.984184  654294 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:38:15.146490  654294 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:38:15.153673  654294 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:38:15.153745  654294 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:38:15.164329  654294 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 11:38:15.164358  654294 start.go:495] detecting cgroup driver to use...
	I1209 11:38:15.164432  654294 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:38:15.185896  654294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:38:15.204111  654294 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:38:15.204195  654294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:38:15.219436  654294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:38:15.234906  654294 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:38:15.382760  654294 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:38:15.528595  654294 docker.go:233] disabling docker service ...
	I1209 11:38:15.528722  654294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:38:15.546980  654294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:38:15.563320  654294 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:38:15.719293  654294 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:38:15.875514  654294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:38:15.890866  654294 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:38:15.910255  654294 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:38:15.910339  654294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:38:15.921546  654294 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:38:15.921635  654294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:38:15.931877  654294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:38:15.942193  654294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:38:15.956609  654294 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:38:15.967348  654294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:38:15.981308  654294 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:38:15.992512  654294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:38:16.005080  654294 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:38:16.015931  654294 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
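The sed one-liners above edit /etc/crio/crio.conf.d/02-crio.conf in place: they point pause_image at registry.k8s.io/pause:3.10, force cgroup_manager to cgroupfs, and inject the unprivileged-port sysctl before CRI-O is restarted. A rough Go equivalent of rewriting a "key = value" line in such a file (a sketch of the idea, not the code minikube runs):

package main

import (
	"fmt"
	"regexp"
)

// rewriteKey replaces any existing `key = value` line in a crio.conf-style
// body with the given value, roughly what the sed commands above do.
func rewriteKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	conf = rewriteKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = rewriteKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}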
	I1209 11:38:16.028197  654294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:38:16.181862  654294 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:38:18.015626  654294 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.83371071s)
	I1209 11:38:18.015666  654294 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:38:18.015736  654294 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:38:18.022203  654294 start.go:563] Will wait 60s for crictl version
	I1209 11:38:18.022299  654294 ssh_runner.go:195] Run: which crictl
	I1209 11:38:18.028282  654294 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:38:18.069719  654294 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:38:18.069816  654294 ssh_runner.go:195] Run: crio --version
	I1209 11:38:18.100817  654294 ssh_runner.go:195] Run: crio --version
	I1209 11:38:18.133428  654294 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:38:18.134608  654294 main.go:141] libmachine: (pause-529265) Calling .GetIP
	I1209 11:38:18.137801  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:18.138251  654294 main.go:141] libmachine: (pause-529265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:89:cb", ip: ""} in network mk-pause-529265: {Iface:virbr1 ExpiryTime:2024-12-09 12:36:48 +0000 UTC Type:0 Mac:52:54:00:db:89:cb Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:pause-529265 Clientid:01:52:54:00:db:89:cb}
	I1209 11:38:18.138283  654294 main.go:141] libmachine: (pause-529265) DBG | domain pause-529265 has defined IP address 192.168.39.137 and MAC address 52:54:00:db:89:cb in network mk-pause-529265
	I1209 11:38:18.138522  654294 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 11:38:18.142852  654294 kubeadm.go:883] updating cluster {Name:pause-529265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2
ClusterName:pause-529265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-p
lugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:38:18.143027  654294 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:38:18.143108  654294 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:38:18.192411  654294 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:38:18.192446  654294 crio.go:433] Images already preloaded, skipping extraction
	I1209 11:38:18.192510  654294 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:38:18.231368  654294 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:38:18.231396  654294 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:38:18.231406  654294 kubeadm.go:934] updating node { 192.168.39.137 8443 v1.31.2 crio true true} ...
	I1209 11:38:18.231587  654294 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-529265 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:pause-529265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:38:18.231672  654294 ssh_runner.go:195] Run: crio config
	I1209 11:38:18.297062  654294 cni.go:84] Creating CNI manager for ""
	I1209 11:38:18.297091  654294 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:38:18.297104  654294 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:38:18.297134  654294 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.137 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-529265 NodeName:pause-529265 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:38:18.297325  654294 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-529265"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.137"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.137"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
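The rendered KubeletConfiguration above hard-codes cgroupDriver: cgroupfs and the CRI-O socket endpoint, which must agree with the cgroup_manager just written into 02-crio.conf; a mismatch typically surfaces as pods failing to start with cgroup errors. A small sketch of reading those two fields back out of such YAML (assumes a YAML decoder such as gopkg.in/yaml.v3 is available; any decoder works):

package main

import (
	"fmt"

	"gopkg.in/yaml.v3" // assumed dependency for this sketch
)

// Only the fields this consistency check cares about.
type kubeletConfig struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
}

func main() {
	doc := []byte(`
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
`)
	var kc kubeletConfig
	if err := yaml.Unmarshal(doc, &kc); err != nil {
		panic(err)
	}
	// CRI-O was configured with cgroup_manager = "cgroupfs" above, so the
	// kubelet's driver must match it.
	fmt.Println(kc.CgroupDriver == "cgroupfs", kc.ContainerRuntimeEndpoint)
}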
	
	I1209 11:38:18.297423  654294 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:38:18.309933  654294 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:38:18.310017  654294 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:38:18.322301  654294 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 11:38:18.344059  654294 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:38:18.366473  654294 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I1209 11:38:18.386573  654294 ssh_runner.go:195] Run: grep 192.168.39.137	control-plane.minikube.internal$ /etc/hosts
	I1209 11:38:18.391080  654294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:38:18.540524  654294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:38:18.560619  654294 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/pause-529265 for IP: 192.168.39.137
	I1209 11:38:18.560646  654294 certs.go:194] generating shared ca certs ...
	I1209 11:38:18.560663  654294 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:38:18.560886  654294 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:38:18.560952  654294 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:38:18.560964  654294 certs.go:256] generating profile certs ...
	I1209 11:38:18.561084  654294 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/pause-529265/client.key
	I1209 11:38:18.561163  654294 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/pause-529265/apiserver.key.a265dede
	I1209 11:38:18.561214  654294 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/pause-529265/proxy-client.key
	I1209 11:38:18.561367  654294 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:38:18.561406  654294 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:38:18.561424  654294 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:38:18.561460  654294 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:38:18.561500  654294 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:38:18.561527  654294 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:38:18.561584  654294 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:38:18.562458  654294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:38:18.596208  654294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:38:18.691692  654294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:38:18.836527  654294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:38:18.964205  654294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/pause-529265/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 11:38:19.007419  654294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/pause-529265/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 11:38:19.087812  654294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/pause-529265/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:38:19.247625  654294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/pause-529265/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 11:38:19.357973  654294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:38:19.479983  654294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:38:19.562641  654294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:38:19.623716  654294 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:38:19.680003  654294 ssh_runner.go:195] Run: openssl version
	I1209 11:38:19.703422  654294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:38:19.718318  654294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:38:19.726768  654294 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:38:19.726846  654294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:38:19.734068  654294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:38:19.748604  654294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:38:19.762076  654294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:38:19.773184  654294 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:38:19.773279  654294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:38:19.785906  654294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:38:19.806246  654294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:38:19.831523  654294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:38:19.849854  654294 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:38:19.849937  654294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:38:19.857236  654294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:38:19.926163  654294 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:38:19.942463  654294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:38:19.961857  654294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:38:19.973797  654294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:38:19.980139  654294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:38:19.988082  654294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:38:19.995247  654294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
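Each "openssl x509 -noout -checkend 86400" run above asks whether the certificate expires within the next 24 hours; a non-zero exit would prompt regeneration before the cluster is restarted. The same check expressed in Go (the path in main is illustrative; on the node these certs live under /var/lib/minikube/certs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path expires within d,
// mirroring what `openssl x509 -checkend <seconds>` flags as expiring.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", expiring)
}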
	I1209 11:38:20.002363  654294 kubeadm.go:392] StartCluster: {Name:pause-529265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:pause-529265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plug
in:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:38:20.002546  654294 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:38:20.002615  654294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:38:20.086845  654294 cri.go:89] found id: "7b56cd8c9787715b1e91b413565fe2f41a70c9f63a067ff3d5e33d92f6e77bb1"
	I1209 11:38:20.086876  654294 cri.go:89] found id: "1f8b2a354553d276bad1bd0ba810a5aadcc95aafb81669149c05c48daa23cc85"
	I1209 11:38:20.086883  654294 cri.go:89] found id: "33ba3053afaf0a9fcd10abf6dde45a993ff265831a41dd56519bfabb6eeda8ca"
	I1209 11:38:20.086888  654294 cri.go:89] found id: "2da0baee10803c8d66c70488dc28bbcb5168ac188eedf55f3bdcf8ca35ec0b6e"
	I1209 11:38:20.086892  654294 cri.go:89] found id: "87d9b7e067ec790850a7d5224c0afcb57c57d2b835e00985bb994729d7d45cf9"
	I1209 11:38:20.086898  654294 cri.go:89] found id: "99e86d67a83768363952b0b84b084aa05e11bde2661f8e32fcff05d30275e001"
	I1209 11:38:20.086902  654294 cri.go:89] found id: "5a09c13512123eb832c91592317786989db38148d9ab2e0c9b89b16ed4c34a21"
	I1209 11:38:20.086906  654294 cri.go:89] found id: "e031c61516c14f9d7088fd9a97492f5b76d109d7573c878781d37f5708838f40"
	I1209 11:38:20.086911  654294 cri.go:89] found id: "a7bb02b5623366a86db9e313f9645ce7606390e7d63c5a1040cce1dc37aa301d"
	I1209 11:38:20.086918  654294 cri.go:89] found id: "789a6ada6df524820fbd3102249db3bd8c07ef5951571363539b4dbafcfa89ce"
	I1209 11:38:20.086923  654294 cri.go:89] found id: "1d9787bbdd7769bac5add617979400f1752b6a0db09ff4ea5c781269ab887090"
	I1209 11:38:20.086927  654294 cri.go:89] found id: ""
	I1209 11:38:20.086986  654294 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-529265 -n pause-529265
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-529265 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-529265 logs -n 25: (1.204548979s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-763643 sudo cat                            | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo cat                            | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo                                | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo                                | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo                                | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo cat                            | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo cat                            | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo                                | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo                                | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo                                | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo find                           | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo crio                           | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-763643                                     | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC | 09 Dec 24 11:36 UTC |
	| start   | -p force-systemd-env-250964                          | force-systemd-env-250964  | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC | 09 Dec 24 11:38 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-529265                                      | pause-529265              | jenkins | v1.34.0 | 09 Dec 24 11:37 UTC | 09 Dec 24 11:39 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p offline-crio-482227                               | offline-crio-482227       | jenkins | v1.34.0 | 09 Dec 24 11:37 UTC | 09 Dec 24 11:37 UTC |
	| start   | -p force-systemd-flag-451257                         | force-systemd-flag-451257 | jenkins | v1.34.0 | 09 Dec 24 11:37 UTC | 09 Dec 24 11:39 UTC |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-597739                               | NoKubernetes-597739       | jenkins | v1.34.0 | 09 Dec 24 11:38 UTC | 09 Dec 24 11:38 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-250964                          | force-systemd-env-250964  | jenkins | v1.34.0 | 09 Dec 24 11:38 UTC | 09 Dec 24 11:38 UTC |
	| start   | -p cert-expiration-752166                            | cert-expiration-752166    | jenkins | v1.34.0 | 09 Dec 24 11:38 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-597739                               | NoKubernetes-597739       | jenkins | v1.34.0 | 09 Dec 24 11:38 UTC | 09 Dec 24 11:38 UTC |
	| start   | -p NoKubernetes-597739                               | NoKubernetes-597739       | jenkins | v1.34.0 | 09 Dec 24 11:38 UTC |                     |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-451257 ssh cat                    | force-systemd-flag-451257 | jenkins | v1.34.0 | 09 Dec 24 11:39 UTC | 09 Dec 24 11:39 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-451257                         | force-systemd-flag-451257 | jenkins | v1.34.0 | 09 Dec 24 11:39 UTC | 09 Dec 24 11:39 UTC |
	| start   | -p cert-options-935628                               | cert-options-935628       | jenkins | v1.34.0 | 09 Dec 24 11:39 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 11:39:02
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 11:39:02.634472  655780 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:39:02.634726  655780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:39:02.634731  655780 out.go:358] Setting ErrFile to fd 2...
	I1209 11:39:02.634733  655780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:39:02.634896  655780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:39:02.636093  655780 out.go:352] Setting JSON to false
	I1209 11:39:02.637195  655780 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":15687,"bootTime":1733728656,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 11:39:02.637311  655780 start.go:139] virtualization: kvm guest
	I1209 11:39:02.639145  655780 out.go:177] * [cert-options-935628] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 11:39:02.640437  655780 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:39:02.640484  655780 notify.go:220] Checking for updates...
	I1209 11:39:02.642587  655780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:39:02.643645  655780 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:39:02.644588  655780 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:39:02.645622  655780 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 11:39:02.646617  655780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:39:02.647983  655780 config.go:182] Loaded profile config "NoKubernetes-597739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1209 11:39:02.648060  655780 config.go:182] Loaded profile config "cert-expiration-752166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:39:02.648163  655780 config.go:182] Loaded profile config "pause-529265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:39:02.648266  655780 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:39:02.684055  655780 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 11:39:02.685015  655780 start.go:297] selected driver: kvm2
	I1209 11:39:02.685022  655780 start.go:901] validating driver "kvm2" against <nil>
	I1209 11:39:02.685032  655780 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:39:02.685781  655780 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:39:02.685863  655780 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 11:39:02.701484  655780 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 11:39:02.701535  655780 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 11:39:02.701827  655780 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 11:39:02.701853  655780 cni.go:84] Creating CNI manager for ""
	I1209 11:39:02.701899  655780 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:39:02.701903  655780 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 11:39:02.701953  655780 start.go:340] cluster config:
	{Name:cert-options-935628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:cert-options-935628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.
1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1209 11:39:02.702051  655780 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:39:02.703750  655780 out.go:177] * Starting "cert-options-935628" primary control-plane node in "cert-options-935628" cluster
	I1209 11:39:00.053979  654294 addons.go:510] duration metric: took 2.696329ms for enable addons: enabled=[]
	I1209 11:39:00.053992  654294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:39:00.254065  654294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:39:00.277180  654294 node_ready.go:35] waiting up to 6m0s for node "pause-529265" to be "Ready" ...
	I1209 11:39:00.281459  654294 node_ready.go:49] node "pause-529265" has status "Ready":"True"
	I1209 11:39:00.281537  654294 node_ready.go:38] duration metric: took 4.313724ms for node "pause-529265" to be "Ready" ...
	I1209 11:39:00.281554  654294 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:39:00.287250  654294 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lbpw6" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:00.507793  654294 pod_ready.go:93] pod "coredns-7c65d6cfc9-lbpw6" in "kube-system" namespace has status "Ready":"True"
	I1209 11:39:00.507828  654294 pod_ready.go:82] duration metric: took 220.541392ms for pod "coredns-7c65d6cfc9-lbpw6" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:00.507844  654294 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-529265" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:00.906345  654294 pod_ready.go:93] pod "etcd-pause-529265" in "kube-system" namespace has status "Ready":"True"
	I1209 11:39:00.906382  654294 pod_ready.go:82] duration metric: took 398.528579ms for pod "etcd-pause-529265" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:00.906399  654294 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-529265" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:01.306162  654294 pod_ready.go:93] pod "kube-apiserver-pause-529265" in "kube-system" namespace has status "Ready":"True"
	I1209 11:39:01.306291  654294 pod_ready.go:82] duration metric: took 399.878535ms for pod "kube-apiserver-pause-529265" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:01.306309  654294 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-529265" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:01.705500  654294 pod_ready.go:93] pod "kube-controller-manager-pause-529265" in "kube-system" namespace has status "Ready":"True"
	I1209 11:39:01.705527  654294 pod_ready.go:82] duration metric: took 399.210052ms for pod "kube-controller-manager-pause-529265" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:01.705538  654294 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-96c5d" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:02.105469  654294 pod_ready.go:93] pod "kube-proxy-96c5d" in "kube-system" namespace has status "Ready":"True"
	I1209 11:39:02.105503  654294 pod_ready.go:82] duration metric: took 399.958612ms for pod "kube-proxy-96c5d" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:02.105512  654294 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-529265" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:02.505328  654294 pod_ready.go:93] pod "kube-scheduler-pause-529265" in "kube-system" namespace has status "Ready":"True"
	I1209 11:39:02.505365  654294 pod_ready.go:82] duration metric: took 399.84513ms for pod "kube-scheduler-pause-529265" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:02.505377  654294 pod_ready.go:39] duration metric: took 2.223806691s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:39:02.505397  654294 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:39:02.505459  654294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:39:02.525680  654294 api_server.go:72] duration metric: took 2.474430571s to wait for apiserver process to appear ...
	I1209 11:39:02.525714  654294 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:39:02.525740  654294 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8443/healthz ...
	I1209 11:39:02.530874  654294 api_server.go:279] https://192.168.39.137:8443/healthz returned 200:
	ok
	I1209 11:39:02.531819  654294 api_server.go:141] control plane version: v1.31.2
	I1209 11:39:02.531839  654294 api_server.go:131] duration metric: took 6.118634ms to wait for apiserver health ...
	I1209 11:39:02.531847  654294 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:39:02.707547  654294 system_pods.go:59] 6 kube-system pods found
	I1209 11:39:02.707576  654294 system_pods.go:61] "coredns-7c65d6cfc9-lbpw6" [12d106d8-2009-4438-8487-53336745302b] Running
	I1209 11:39:02.707583  654294 system_pods.go:61] "etcd-pause-529265" [abd5409d-fef3-4a92-9827-3f46526dcc2f] Running
	I1209 11:39:02.707589  654294 system_pods.go:61] "kube-apiserver-pause-529265" [b4356d27-51ba-4063-900b-f449f9553286] Running
	I1209 11:39:02.707593  654294 system_pods.go:61] "kube-controller-manager-pause-529265" [6c770060-667c-44fe-a821-47f55502b61a] Running
	I1209 11:39:02.707600  654294 system_pods.go:61] "kube-proxy-96c5d" [4724f3fd-b481-4a95-b628-3bdaee03df58] Running
	I1209 11:39:02.707605  654294 system_pods.go:61] "kube-scheduler-pause-529265" [cb045617-3848-4e28-b2e6-2b7798897621] Running
	I1209 11:39:02.707614  654294 system_pods.go:74] duration metric: took 175.759249ms to wait for pod list to return data ...
	I1209 11:39:02.707625  654294 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:39:02.905782  654294 default_sa.go:45] found service account: "default"
	I1209 11:39:02.905807  654294 default_sa.go:55] duration metric: took 198.172667ms for default service account to be created ...
	I1209 11:39:02.905817  654294 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:39:03.106850  654294 system_pods.go:86] 6 kube-system pods found
	I1209 11:39:03.106879  654294 system_pods.go:89] "coredns-7c65d6cfc9-lbpw6" [12d106d8-2009-4438-8487-53336745302b] Running
	I1209 11:39:03.106884  654294 system_pods.go:89] "etcd-pause-529265" [abd5409d-fef3-4a92-9827-3f46526dcc2f] Running
	I1209 11:39:03.106888  654294 system_pods.go:89] "kube-apiserver-pause-529265" [b4356d27-51ba-4063-900b-f449f9553286] Running
	I1209 11:39:03.106892  654294 system_pods.go:89] "kube-controller-manager-pause-529265" [6c770060-667c-44fe-a821-47f55502b61a] Running
	I1209 11:39:03.106896  654294 system_pods.go:89] "kube-proxy-96c5d" [4724f3fd-b481-4a95-b628-3bdaee03df58] Running
	I1209 11:39:03.106899  654294 system_pods.go:89] "kube-scheduler-pause-529265" [cb045617-3848-4e28-b2e6-2b7798897621] Running
	I1209 11:39:03.106905  654294 system_pods.go:126] duration metric: took 201.083281ms to wait for k8s-apps to be running ...
	I1209 11:39:03.106912  654294 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:39:03.106955  654294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:39:03.120800  654294 system_svc.go:56] duration metric: took 13.878988ms WaitForService to wait for kubelet
	I1209 11:39:03.120830  654294 kubeadm.go:582] duration metric: took 3.06959407s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:39:03.120847  654294 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:39:03.304949  654294 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:39:03.304975  654294 node_conditions.go:123] node cpu capacity is 2
	I1209 11:39:03.304992  654294 node_conditions.go:105] duration metric: took 184.140298ms to run NodePressure ...
	I1209 11:39:03.305022  654294 start.go:241] waiting for startup goroutines ...
	I1209 11:39:03.305045  654294 start.go:246] waiting for cluster config update ...
	I1209 11:39:03.305057  654294 start.go:255] writing updated cluster config ...
	I1209 11:39:03.305382  654294 ssh_runner.go:195] Run: rm -f paused
	I1209 11:39:03.353593  654294 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:39:03.355259  654294 out.go:177] * Done! kubectl is now configured to use "pause-529265" cluster and "default" namespace by default
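	
	The Last Start block above finishes with minikube's readiness checks for pause-529265: node Ready, per-pod Ready, the kube-apiserver process, and a 200 from the /healthz endpoint. Below is a minimal Go sketch of that final healthz probe only, assuming direct network access to the apiserver address recorded in the log; minikube's own check differs in detail, and the sketch skips TLS verification because it does not load the cluster CA.
	
	    package main
	
	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )
	
	    func main() {
	        // Address taken from the log above; the test VM no longer exists, so this is purely illustrative.
	        const healthz = "https://192.168.39.137:8443/healthz"
	
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                // The cluster CA is not loaded here, so certificate verification is skipped.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        resp, err := client.Get(healthz)
	        if err != nil {
	            panic(err)
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
	    }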
	
	
	==> CRI-O <==
	Dec 09 11:39:03 pause-529265 crio[2336]: time="2024-12-09 11:39:03.989593787Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744343989571796,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0013ec72-0ae7-4217-bfa8-04ee3677e599 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:39:03 pause-529265 crio[2336]: time="2024-12-09 11:39:03.990040662Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b26fab3-692b-49cd-a1aa-630ff50121f4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:03 pause-529265 crio[2336]: time="2024-12-09 11:39:03.990107260Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b26fab3-692b-49cd-a1aa-630ff50121f4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:03 pause-529265 crio[2336]: time="2024-12-09 11:39:03.990344308Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b76d3b70fbda2f3ddc918debff7b70b481699eb997d48e505e26ae2f11a87997,PodSandboxId:889b5ff7952d5f04a70897fb2279f2ee29495bee1751cd43b6c540320af5e802,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733744322946773539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4395cca1841dbdb67cff36eec0a10ff6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb3d343aa45baf1f29eee5aa63bb0b681ca1a3f371c7df0b297730eed39389d5,PodSandboxId:20db52729296aae0c64d8eb498ecacfb16bbd5839567cce8c9dc2218b489d03d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733744322970676546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e373a3d8a9080aa7bdaf5aa3dc33dc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6100c030924c1bda5d7e3f16ca4eee787f0ecc9713226426c7f4dbf0e28de0,PodSandboxId:8253ff818926963a734a83b6e56266f73c8e7e78dcf7cbf3ed723c5096732b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733744322917286791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e728576d235a404dc3d55892eb05ecb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333147e8a89653476b04335614d92a7499aee0a25829570f8ec0d64f62cc22ab,PodSandboxId:c3dbc3f05adbc85fd267b5924280225ff937781d899a4d64e8a174cab7329880,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733744319338177475,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee20116799e47b2c6d59a844ac20d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdec1e4dfdcac30e0b3b896e872220973c73c7b9ee1573d4b2864f81c89f277,PodSandboxId:90cac579c4dc9ed51cc3e08fafba7dd1bbf9cbe30aaf681aa35dcfbed4795078,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733744317342397931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lbpw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d106d8-2009-4438-8487-53336745302b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54723efd278180f1068f7ebf26c7f1aefaa082144fee4af513ccfe973ac23757,PodSandboxId:5a9cc6f32c36e65e0aaf2bd3e71e0cf5131ea1577a850c304a29d66a7e575868,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733744311342566119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96c5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724f3fd-b481-4a95-b628-3bdaee03df58,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8844efda91d299dc8c21e077e850c1a61f66450372a1b174157411dbf5db1ce,PodSandboxId:90cac579c4dc9ed51cc3e08fafba7dd1bbf9cbe30aaf681aa35dcfbed4795078,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733744299892817605,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lbpw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d106d8-2009-4438-8487-53336745302b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b56cd8c9787715b1e91b413565fe2f41a70c9f63a067ff3d5e33d92f6e77bb1,PodSandboxId:5a9cc6f32c36e65e0aaf2bd3e71e0cf5131ea1577a850c304a29d66a7e575868,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733744299146242176,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-96c5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724f3fd-b481-4a95-b628-3bdaee03df58,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8b2a354553d276bad1bd0ba810a5aadcc95aafb81669149c05c48daa23cc85,PodSandboxId:8253ff818926963a734a83b6e56266f73c8e7e78dcf7cbf3ed723c5096732b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733744299111041317,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-529265,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: e728576d235a404dc3d55892eb05ecb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ba3053afaf0a9fcd10abf6dde45a993ff265831a41dd56519bfabb6eeda8ca,PodSandboxId:c3dbc3f05adbc85fd267b5924280225ff937781d899a4d64e8a174cab7329880,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733744299068063820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-529265,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee20116799e47b2c6d59a844ac20d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da0baee10803c8d66c70488dc28bbcb5168ac188eedf55f3bdcf8ca35ec0b6e,PodSandboxId:20db52729296aae0c64d8eb498ecacfb16bbd5839567cce8c9dc2218b489d03d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733744298989265060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-529265,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 19e373a3d8a9080aa7bdaf5aa3dc33dc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d9b7e067ec790850a7d5224c0afcb57c57d2b835e00985bb994729d7d45cf9,PodSandboxId:889b5ff7952d5f04a70897fb2279f2ee29495bee1751cd43b6c540320af5e802,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733744298965604594,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 4395cca1841dbdb67cff36eec0a10ff6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b26fab3-692b-49cd-a1aa-630ff50121f4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.028208294Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=48667c78-d4fe-4b3d-98b3-06419d8681eb name=/runtime.v1.RuntimeService/Version
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.028297383Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=48667c78-d4fe-4b3d-98b3-06419d8681eb name=/runtime.v1.RuntimeService/Version
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.029422723Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=718452fe-1e29-4ea6-b2e6-3e0c432f07bf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.029802408Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744344029780640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=718452fe-1e29-4ea6-b2e6-3e0c432f07bf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.030484807Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9301c652-f4ac-4057-a52f-f3d0d2378a1f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.030559465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9301c652-f4ac-4057-a52f-f3d0d2378a1f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.030845228Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b76d3b70fbda2f3ddc918debff7b70b481699eb997d48e505e26ae2f11a87997,PodSandboxId:889b5ff7952d5f04a70897fb2279f2ee29495bee1751cd43b6c540320af5e802,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733744322946773539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4395cca1841dbdb67cff36eec0a10ff6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb3d343aa45baf1f29eee5aa63bb0b681ca1a3f371c7df0b297730eed39389d5,PodSandboxId:20db52729296aae0c64d8eb498ecacfb16bbd5839567cce8c9dc2218b489d03d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733744322970676546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e373a3d8a9080aa7bdaf5aa3dc33dc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6100c030924c1bda5d7e3f16ca4eee787f0ecc9713226426c7f4dbf0e28de0,PodSandboxId:8253ff818926963a734a83b6e56266f73c8e7e78dcf7cbf3ed723c5096732b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733744322917286791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e728576d235a404dc3d55892eb05ecb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333147e8a89653476b04335614d92a7499aee0a25829570f8ec0d64f62cc22ab,PodSandboxId:c3dbc3f05adbc85fd267b5924280225ff937781d899a4d64e8a174cab7329880,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733744319338177475,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee20116799e47b2c6d59a844ac20d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdec1e4dfdcac30e0b3b896e872220973c73c7b9ee1573d4b2864f81c89f277,PodSandboxId:90cac579c4dc9ed51cc3e08fafba7dd1bbf9cbe30aaf681aa35dcfbed4795078,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733744317342397931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lbpw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d106d8-2009-4438-8487-53336745302b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54723efd278180f1068f7ebf26c7f1aefaa082144fee4af513ccfe973ac23757,PodSandboxId:5a9cc6f32c36e65e0aaf2bd3e71e0cf5131ea1577a850c304a29d66a7e575868,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733744311342566119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96c5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724f3fd-b481-4a95-b628-3bdaee03df58,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8844efda91d299dc8c21e077e850c1a61f66450372a1b174157411dbf5db1ce,PodSandboxId:90cac579c4dc9ed51cc3e08fafba7dd1bbf9cbe30aaf681aa35dcfbed4795078,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733744299892817605,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lbpw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d106d8-2009-4438-8487-53336745302b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b56cd8c9787715b1e91b413565fe2f41a70c9f63a067ff3d5e33d92f6e77bb1,PodSandboxId:5a9cc6f32c36e65e0aaf2bd3e71e0cf5131ea1577a850c304a29d66a7e575868,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733744299146242176,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-96c5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724f3fd-b481-4a95-b628-3bdaee03df58,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8b2a354553d276bad1bd0ba810a5aadcc95aafb81669149c05c48daa23cc85,PodSandboxId:8253ff818926963a734a83b6e56266f73c8e7e78dcf7cbf3ed723c5096732b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733744299111041317,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-529265,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: e728576d235a404dc3d55892eb05ecb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ba3053afaf0a9fcd10abf6dde45a993ff265831a41dd56519bfabb6eeda8ca,PodSandboxId:c3dbc3f05adbc85fd267b5924280225ff937781d899a4d64e8a174cab7329880,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733744299068063820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-529265,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee20116799e47b2c6d59a844ac20d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da0baee10803c8d66c70488dc28bbcb5168ac188eedf55f3bdcf8ca35ec0b6e,PodSandboxId:20db52729296aae0c64d8eb498ecacfb16bbd5839567cce8c9dc2218b489d03d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733744298989265060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-529265,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 19e373a3d8a9080aa7bdaf5aa3dc33dc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d9b7e067ec790850a7d5224c0afcb57c57d2b835e00985bb994729d7d45cf9,PodSandboxId:889b5ff7952d5f04a70897fb2279f2ee29495bee1751cd43b6c540320af5e802,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733744298965604594,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 4395cca1841dbdb67cff36eec0a10ff6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9301c652-f4ac-4057-a52f-f3d0d2378a1f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.067712800Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b9fae41-2c87-4bd5-a627-ce2bec1c2817 name=/runtime.v1.RuntimeService/Version
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.067783773Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b9fae41-2c87-4bd5-a627-ce2bec1c2817 name=/runtime.v1.RuntimeService/Version
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.069075951Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ffd7f0e-0e78-4c3a-b597-5d5ef309c671 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.069411184Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744344069390242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ffd7f0e-0e78-4c3a-b597-5d5ef309c671 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.069871991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bad21b8a-8c25-4946-b959-4bf9d4a5befa name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.069979005Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bad21b8a-8c25-4946-b959-4bf9d4a5befa name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.070209372Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b76d3b70fbda2f3ddc918debff7b70b481699eb997d48e505e26ae2f11a87997,PodSandboxId:889b5ff7952d5f04a70897fb2279f2ee29495bee1751cd43b6c540320af5e802,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733744322946773539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4395cca1841dbdb67cff36eec0a10ff6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb3d343aa45baf1f29eee5aa63bb0b681ca1a3f371c7df0b297730eed39389d5,PodSandboxId:20db52729296aae0c64d8eb498ecacfb16bbd5839567cce8c9dc2218b489d03d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733744322970676546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e373a3d8a9080aa7bdaf5aa3dc33dc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6100c030924c1bda5d7e3f16ca4eee787f0ecc9713226426c7f4dbf0e28de0,PodSandboxId:8253ff818926963a734a83b6e56266f73c8e7e78dcf7cbf3ed723c5096732b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733744322917286791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e728576d235a404dc3d55892eb05ecb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333147e8a89653476b04335614d92a7499aee0a25829570f8ec0d64f62cc22ab,PodSandboxId:c3dbc3f05adbc85fd267b5924280225ff937781d899a4d64e8a174cab7329880,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733744319338177475,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee20116799e47b2c6d59a844ac20d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdec1e4dfdcac30e0b3b896e872220973c73c7b9ee1573d4b2864f81c89f277,PodSandboxId:90cac579c4dc9ed51cc3e08fafba7dd1bbf9cbe30aaf681aa35dcfbed4795078,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733744317342397931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lbpw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d106d8-2009-4438-8487-53336745302b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54723efd278180f1068f7ebf26c7f1aefaa082144fee4af513ccfe973ac23757,PodSandboxId:5a9cc6f32c36e65e0aaf2bd3e71e0cf5131ea1577a850c304a29d66a7e575868,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733744311342566119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96c5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724f3fd-b481-4a95-b628-3bdaee03df58,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8844efda91d299dc8c21e077e850c1a61f66450372a1b174157411dbf5db1ce,PodSandboxId:90cac579c4dc9ed51cc3e08fafba7dd1bbf9cbe30aaf681aa35dcfbed4795078,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733744299892817605,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lbpw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d106d8-2009-4438-8487-53336745302b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b56cd8c9787715b1e91b413565fe2f41a70c9f63a067ff3d5e33d92f6e77bb1,PodSandboxId:5a9cc6f32c36e65e0aaf2bd3e71e0cf5131ea1577a850c304a29d66a7e575868,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733744299146242176,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-96c5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724f3fd-b481-4a95-b628-3bdaee03df58,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8b2a354553d276bad1bd0ba810a5aadcc95aafb81669149c05c48daa23cc85,PodSandboxId:8253ff818926963a734a83b6e56266f73c8e7e78dcf7cbf3ed723c5096732b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733744299111041317,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-529265,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: e728576d235a404dc3d55892eb05ecb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ba3053afaf0a9fcd10abf6dde45a993ff265831a41dd56519bfabb6eeda8ca,PodSandboxId:c3dbc3f05adbc85fd267b5924280225ff937781d899a4d64e8a174cab7329880,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733744299068063820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-529265,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee20116799e47b2c6d59a844ac20d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da0baee10803c8d66c70488dc28bbcb5168ac188eedf55f3bdcf8ca35ec0b6e,PodSandboxId:20db52729296aae0c64d8eb498ecacfb16bbd5839567cce8c9dc2218b489d03d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733744298989265060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-529265,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 19e373a3d8a9080aa7bdaf5aa3dc33dc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d9b7e067ec790850a7d5224c0afcb57c57d2b835e00985bb994729d7d45cf9,PodSandboxId:889b5ff7952d5f04a70897fb2279f2ee29495bee1751cd43b6c540320af5e802,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733744298965604594,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 4395cca1841dbdb67cff36eec0a10ff6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bad21b8a-8c25-4946-b959-4bf9d4a5befa name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.108580130Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f26ef904-c1d1-4f97-a689-bf462355becc name=/runtime.v1.RuntimeService/Version
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.108657095Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f26ef904-c1d1-4f97-a689-bf462355becc name=/runtime.v1.RuntimeService/Version
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.109590585Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=746a4d4a-32a9-4d04-b56e-cf7421aee27d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.110306996Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744344110280515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=746a4d4a-32a9-4d04-b56e-cf7421aee27d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.110744536Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9bea01ab-4e0c-418c-bdc4-49bb702cadce name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.110791777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9bea01ab-4e0c-418c-bdc4-49bb702cadce name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:04 pause-529265 crio[2336]: time="2024-12-09 11:39:04.111177048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b76d3b70fbda2f3ddc918debff7b70b481699eb997d48e505e26ae2f11a87997,PodSandboxId:889b5ff7952d5f04a70897fb2279f2ee29495bee1751cd43b6c540320af5e802,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733744322946773539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4395cca1841dbdb67cff36eec0a10ff6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb3d343aa45baf1f29eee5aa63bb0b681ca1a3f371c7df0b297730eed39389d5,PodSandboxId:20db52729296aae0c64d8eb498ecacfb16bbd5839567cce8c9dc2218b489d03d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733744322970676546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e373a3d8a9080aa7bdaf5aa3dc33dc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6100c030924c1bda5d7e3f16ca4eee787f0ecc9713226426c7f4dbf0e28de0,PodSandboxId:8253ff818926963a734a83b6e56266f73c8e7e78dcf7cbf3ed723c5096732b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733744322917286791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e728576d235a404dc3d55892eb05ecb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333147e8a89653476b04335614d92a7499aee0a25829570f8ec0d64f62cc22ab,PodSandboxId:c3dbc3f05adbc85fd267b5924280225ff937781d899a4d64e8a174cab7329880,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733744319338177475,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee20116799e47b2c6d59a844ac20d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdec1e4dfdcac30e0b3b896e872220973c73c7b9ee1573d4b2864f81c89f277,PodSandboxId:90cac579c4dc9ed51cc3e08fafba7dd1bbf9cbe30aaf681aa35dcfbed4795078,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733744317342397931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lbpw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d106d8-2009-4438-8487-53336745302b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54723efd278180f1068f7ebf26c7f1aefaa082144fee4af513ccfe973ac23757,PodSandboxId:5a9cc6f32c36e65e0aaf2bd3e71e0cf5131ea1577a850c304a29d66a7e575868,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733744311342566119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96c5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724f3fd-b481-4a95-b628-3bdaee03df58,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8844efda91d299dc8c21e077e850c1a61f66450372a1b174157411dbf5db1ce,PodSandboxId:90cac579c4dc9ed51cc3e08fafba7dd1bbf9cbe30aaf681aa35dcfbed4795078,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733744299892817605,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lbpw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d106d8-2009-4438-8487-53336745302b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b56cd8c9787715b1e91b413565fe2f41a70c9f63a067ff3d5e33d92f6e77bb1,PodSandboxId:5a9cc6f32c36e65e0aaf2bd3e71e0cf5131ea1577a850c304a29d66a7e575868,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733744299146242176,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-96c5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724f3fd-b481-4a95-b628-3bdaee03df58,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8b2a354553d276bad1bd0ba810a5aadcc95aafb81669149c05c48daa23cc85,PodSandboxId:8253ff818926963a734a83b6e56266f73c8e7e78dcf7cbf3ed723c5096732b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733744299111041317,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-529265,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: e728576d235a404dc3d55892eb05ecb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ba3053afaf0a9fcd10abf6dde45a993ff265831a41dd56519bfabb6eeda8ca,PodSandboxId:c3dbc3f05adbc85fd267b5924280225ff937781d899a4d64e8a174cab7329880,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733744299068063820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-529265,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee20116799e47b2c6d59a844ac20d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da0baee10803c8d66c70488dc28bbcb5168ac188eedf55f3bdcf8ca35ec0b6e,PodSandboxId:20db52729296aae0c64d8eb498ecacfb16bbd5839567cce8c9dc2218b489d03d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733744298989265060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-529265,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 19e373a3d8a9080aa7bdaf5aa3dc33dc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d9b7e067ec790850a7d5224c0afcb57c57d2b835e00985bb994729d7d45cf9,PodSandboxId:889b5ff7952d5f04a70897fb2279f2ee29495bee1751cd43b6c540320af5e802,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733744298965604594,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 4395cca1841dbdb67cff36eec0a10ff6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9bea01ab-4e0c-418c-bdc4-49bb702cadce name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fb3d343aa45ba       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   21 seconds ago      Running             kube-scheduler            2                   20db52729296a       kube-scheduler-pause-529265
	b76d3b70fbda2       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   21 seconds ago      Running             kube-apiserver            2                   889b5ff7952d5       kube-apiserver-pause-529265
	0b6100c030924       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   21 seconds ago      Running             etcd                      2                   8253ff8189269       etcd-pause-529265
	333147e8a8965       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   24 seconds ago      Running             kube-controller-manager   2                   c3dbc3f05adbc       kube-controller-manager-pause-529265
	0bdec1e4dfdca       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   26 seconds ago      Running             coredns                   2                   90cac579c4dc9       coredns-7c65d6cfc9-lbpw6
	54723efd27818       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   32 seconds ago      Running             kube-proxy                2                   5a9cc6f32c36e       kube-proxy-96c5d
	a8844efda91d2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   44 seconds ago      Exited              coredns                   1                   90cac579c4dc9       coredns-7c65d6cfc9-lbpw6
	7b56cd8c97877       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   45 seconds ago      Exited              kube-proxy                1                   5a9cc6f32c36e       kube-proxy-96c5d
	1f8b2a354553d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   45 seconds ago      Exited              etcd                      1                   8253ff8189269       etcd-pause-529265
	33ba3053afaf0       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   45 seconds ago      Exited              kube-controller-manager   1                   c3dbc3f05adbc       kube-controller-manager-pause-529265
	2da0baee10803       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   45 seconds ago      Exited              kube-scheduler            1                   20db52729296a       kube-scheduler-pause-529265
	87d9b7e067ec7       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   45 seconds ago      Exited              kube-apiserver            1                   889b5ff7952d5       kube-apiserver-pause-529265
	
	
	==> coredns [0bdec1e4dfdcac30e0b3b896e872220973c73c7b9ee1573d4b2864f81c89f277] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47212->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47212->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47198->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47198->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47196->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47196->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55357 - 51627 "HINFO IN 4791590791798458663.8364992037365514839. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011256759s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [a8844efda91d299dc8c21e077e850c1a61f66450372a1b174157411dbf5db1ce] <==
	
	
	==> describe nodes <==
	Name:               pause-529265
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-529265
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=pause-529265
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T11_37_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 11:37:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-529265
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 11:38:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 11:38:46 +0000   Mon, 09 Dec 2024 11:37:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 11:38:46 +0000   Mon, 09 Dec 2024 11:37:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 11:38:46 +0000   Mon, 09 Dec 2024 11:37:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 11:38:46 +0000   Mon, 09 Dec 2024 11:37:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    pause-529265
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 060a33f57f1a4609945486db94879266
	  System UUID:                060a33f5-7f1a-4609-9454-86db94879266
	  Boot ID:                    2c03bb52-5989-4560-964d-c1b03c82d1b6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-lbpw6                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     105s
	  kube-system                 etcd-pause-529265                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         110s
	  kube-system                 kube-apiserver-pause-529265             250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-pause-529265    200m (10%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-96c5d                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-pause-529265             100m (5%)     0 (0%)      0 (0%)           0 (0%)         110s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeHasSufficientPID     110s               kubelet          Node pause-529265 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  110s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  110s               kubelet          Node pause-529265 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s               kubelet          Node pause-529265 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 110s               kubelet          Starting kubelet.
	  Normal  NodeReady                109s               kubelet          Node pause-529265 status is now: NodeReady
	  Normal  RegisteredNode           106s               node-controller  Node pause-529265 event: Registered Node pause-529265 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-529265 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-529265 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-529265 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                node-controller  Node pause-529265 event: Registered Node pause-529265 in Controller
	
	
	==> dmesg <==
	[  +0.059367] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051363] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.163370] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.130374] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.275357] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Dec 9 11:37] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.440170] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.062884] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.490443] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[  +0.075393] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.322542] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[  +0.123998] kauditd_printk_skb: 21 callbacks suppressed
	[  +7.179026] kauditd_printk_skb: 88 callbacks suppressed
	[Dec 9 11:38] systemd-fstab-generator[2260]: Ignoring "noauto" option for root device
	[  +0.084828] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.069768] systemd-fstab-generator[2272]: Ignoring "noauto" option for root device
	[  +0.190385] systemd-fstab-generator[2286]: Ignoring "noauto" option for root device
	[  +0.146453] systemd-fstab-generator[2298]: Ignoring "noauto" option for root device
	[  +0.309013] systemd-fstab-generator[2326]: Ignoring "noauto" option for root device
	[  +2.356409] systemd-fstab-generator[2446]: Ignoring "noauto" option for root device
	[  +2.285549] kauditd_printk_skb: 195 callbacks suppressed
	[ +21.504768] systemd-fstab-generator[3356]: Ignoring "noauto" option for root device
	[  +0.279304] kauditd_printk_skb: 22 callbacks suppressed
	[  +7.398936] kauditd_printk_skb: 14 callbacks suppressed
	[ +10.203712] systemd-fstab-generator[3689]: Ignoring "noauto" option for root device
	
	
	==> etcd [0b6100c030924c1bda5d7e3f16ca4eee787f0ecc9713226426c7f4dbf0e28de0] <==
	{"level":"info","ts":"2024-12-09T11:38:49.557510Z","caller":"traceutil/trace.go:171","msg":"trace[437221602] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:448; }","duration":"349.05938ms","start":"2024-12-09T11:38:49.208439Z","end":"2024-12-09T11:38:49.557499Z","steps":["trace[437221602] 'agreement among raft nodes before linearized reading'  (duration: 348.968103ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T11:38:49.557562Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T11:38:49.208396Z","time spent":"349.157706ms","remote":"127.0.0.1:52184","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":1,"response size":226,"request content":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" "}
	{"level":"warn","ts":"2024-12-09T11:38:49.891749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.33679ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9748766706966870230 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-gh29q\" mod_revision:384 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-gh29q\" value_size:1193 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-gh29q\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-09T11:38:49.892001Z","caller":"traceutil/trace.go:171","msg":"trace[1995444147] linearizableReadLoop","detail":"{readStateIndex:490; appliedIndex:488; }","duration":"223.445401ms","start":"2024-12-09T11:38:49.668531Z","end":"2024-12-09T11:38:49.891976Z","steps":["trace[1995444147] 'read index received'  (duration: 99.718174ms)","trace[1995444147] 'applied index is now lower than readState.Index'  (duration: 123.726426ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T11:38:49.892167Z","caller":"traceutil/trace.go:171","msg":"trace[2053266273] transaction","detail":"{read_only:false; response_revision:451; number_of_response:1; }","duration":"310.612972ms","start":"2024-12-09T11:38:49.581496Z","end":"2024-12-09T11:38:49.892109Z","steps":["trace[2053266273] 'process raft request'  (duration: 310.351522ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T11:38:49.892289Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.748045ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4133"}
	{"level":"info","ts":"2024-12-09T11:38:49.892358Z","caller":"traceutil/trace.go:171","msg":"trace[1798635486] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:451; }","duration":"223.820329ms","start":"2024-12-09T11:38:49.668527Z","end":"2024-12-09T11:38:49.892347Z","steps":["trace[1798635486] 'agreement among raft nodes before linearized reading'  (duration: 223.719194ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T11:38:49.892404Z","caller":"traceutil/trace.go:171","msg":"trace[1064574568] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"315.940087ms","start":"2024-12-09T11:38:49.576450Z","end":"2024-12-09T11:38:49.892390Z","steps":["trace[1064574568] 'process raft request'  (duration: 191.851474ms)","trace[1064574568] 'compare'  (duration: 123.259241ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T11:38:49.892475Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T11:38:49.576442Z","time spent":"316.010289ms","remote":"127.0.0.1:52256","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1252,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-gh29q\" mod_revision:384 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-gh29q\" value_size:1193 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-gh29q\" > >"}
	{"level":"warn","ts":"2024-12-09T11:38:49.892941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.202157ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-pause-529265\" ","response":"range_response_count:1 size:4566"}
	{"level":"info","ts":"2024-12-09T11:38:49.894759Z","caller":"traceutil/trace.go:171","msg":"trace[1371166847] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-pause-529265; range_end:; response_count:1; response_revision:451; }","duration":"226.062162ms","start":"2024-12-09T11:38:49.668683Z","end":"2024-12-09T11:38:49.894746Z","steps":["trace[1371166847] 'agreement among raft nodes before linearized reading'  (duration: 224.178743ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T11:38:49.894517Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.89465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/pause-529265\" ","response":"range_response_count:1 size:706"}
	{"level":"info","ts":"2024-12-09T11:38:49.895887Z","caller":"traceutil/trace.go:171","msg":"trace[2146788514] range","detail":"{range_begin:/registry/csinodes/pause-529265; range_end:; response_count:1; response_revision:451; }","duration":"227.26913ms","start":"2024-12-09T11:38:49.668609Z","end":"2024-12-09T11:38:49.895878Z","steps":["trace[2146788514] 'agreement among raft nodes before linearized reading'  (duration: 223.919488ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T11:38:49.894599Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.912731ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-529265\" ","response":"range_response_count:1 size:5847"}
	{"level":"warn","ts":"2024-12-09T11:38:49.894640Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.967034ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-529265\" ","response":"range_response_count:1 size:6600"}
	{"level":"warn","ts":"2024-12-09T11:38:49.894668Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.013475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/kube-dns\" ","response":"range_response_count:1 size:1211"}
	{"level":"warn","ts":"2024-12-09T11:38:49.894697Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.065318ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-529265\" ","response":"range_response_count:1 size:7000"}
	{"level":"warn","ts":"2024-12-09T11:38:49.892303Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T11:38:49.581487Z","time spent":"310.762313ms","remote":"127.0.0.1:52442","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:448 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2024-12-09T11:38:49.897278Z","caller":"traceutil/trace.go:171","msg":"trace[309388272] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-529265; range_end:; response_count:1; response_revision:451; }","duration":"228.590909ms","start":"2024-12-09T11:38:49.668679Z","end":"2024-12-09T11:38:49.897270Z","steps":["trace[309388272] 'agreement among raft nodes before linearized reading'  (duration: 225.859553ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T11:38:49.897375Z","caller":"traceutil/trace.go:171","msg":"trace[1349455708] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-529265; range_end:; response_count:1; response_revision:451; }","duration":"228.693473ms","start":"2024-12-09T11:38:49.668668Z","end":"2024-12-09T11:38:49.897362Z","steps":["trace[1349455708] 'agreement among raft nodes before linearized reading'  (duration: 225.946897ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T11:38:49.897474Z","caller":"traceutil/trace.go:171","msg":"trace[1040830611] range","detail":"{range_begin:/registry/services/specs/kube-system/kube-dns; range_end:; response_count:1; response_revision:451; }","duration":"228.798608ms","start":"2024-12-09T11:38:49.668649Z","end":"2024-12-09T11:38:49.897448Z","steps":["trace[1040830611] 'agreement among raft nodes before linearized reading'  (duration: 226.001452ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T11:38:49.897568Z","caller":"traceutil/trace.go:171","msg":"trace[158366361] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-529265; range_end:; response_count:1; response_revision:451; }","duration":"228.933109ms","start":"2024-12-09T11:38:49.668628Z","end":"2024-12-09T11:38:49.897561Z","steps":["trace[158366361] 'agreement among raft nodes before linearized reading'  (duration: 226.051315ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T11:38:50.167329Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.314956ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-529265\" ","response":"range_response_count:1 size:5428"}
	{"level":"info","ts":"2024-12-09T11:38:50.167806Z","caller":"traceutil/trace.go:171","msg":"trace[379048474] range","detail":"{range_begin:/registry/minions/pause-529265; range_end:; response_count:1; response_revision:451; }","duration":"173.802852ms","start":"2024-12-09T11:38:49.993987Z","end":"2024-12-09T11:38:50.167789Z","steps":["trace[379048474] 'range keys from in-memory index tree'  (duration: 173.224534ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T11:38:51.273853Z","caller":"traceutil/trace.go:171","msg":"trace[434038371] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"168.197261ms","start":"2024-12-09T11:38:51.105643Z","end":"2024-12-09T11:38:51.273840Z","steps":["trace[434038371] 'process raft request'  (duration: 168.090393ms)"],"step_count":1}
	
	
	==> etcd [1f8b2a354553d276bad1bd0ba810a5aadcc95aafb81669149c05c48daa23cc85] <==
	{"level":"info","ts":"2024-12-09T11:38:20.149689Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-12-09T11:38:20.159439Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"8623b2a8b011233f","local-member-id":"5527995f6263874a","commit-index":420}
	{"level":"info","ts":"2024-12-09T11:38:20.159587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a switched to configuration voters=()"}
	{"level":"info","ts":"2024-12-09T11:38:20.159675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became follower at term 2"}
	{"level":"info","ts":"2024-12-09T11:38:20.159711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 5527995f6263874a [peers: [], term: 2, commit: 420, applied: 0, lastindex: 420, lastterm: 2]"}
	{"level":"warn","ts":"2024-12-09T11:38:20.164982Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-12-09T11:38:20.205701Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":398}
	{"level":"info","ts":"2024-12-09T11:38:20.249979Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-12-09T11:38:20.257203Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"5527995f6263874a","timeout":"7s"}
	{"level":"info","ts":"2024-12-09T11:38:20.259220Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"5527995f6263874a"}
	{"level":"info","ts":"2024-12-09T11:38:20.260972Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"5527995f6263874a","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-12-09T11:38:20.261556Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T11:38:20.262225Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-12-09T11:38:20.262396Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-12-09T11:38:20.268249Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-12-09T11:38:20.268343Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-12-09T11:38:20.269269Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-09T11:38:20.269481Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"5527995f6263874a","initial-advertise-peer-urls":["https://192.168.39.137:2380"],"listen-peer-urls":["https://192.168.39.137:2380"],"advertise-client-urls":["https://192.168.39.137:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.137:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-09T11:38:20.269518Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-09T11:38:20.269606Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2024-12-09T11:38:20.269629Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2024-12-09T11:38:20.262622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a switched to configuration voters=(6136041652267222858)"}
	{"level":"info","ts":"2024-12-09T11:38:20.270150Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8623b2a8b011233f","local-member-id":"5527995f6263874a","added-peer-id":"5527995f6263874a","added-peer-peer-urls":["https://192.168.39.137:2380"]}
	{"level":"info","ts":"2024-12-09T11:38:20.270307Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8623b2a8b011233f","local-member-id":"5527995f6263874a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:38:20.270340Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	
	
	==> kernel <==
	 11:39:04 up 2 min,  0 users,  load average: 1.22, 0.43, 0.16
	Linux pause-529265 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [87d9b7e067ec790850a7d5224c0afcb57c57d2b835e00985bb994729d7d45cf9] <==
	I1209 11:38:19.597338       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 11:38:20.595255       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1209 11:38:20.620040       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1209 11:38:20.623762       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1209 11:38:20.637011       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1209 11:38:20.637328       1 instance.go:232] Using reconciler: lease
	W1209 11:38:20.682336       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:44434->127.0.0.1:2379: read: connection reset by peer"
	W1209 11:38:20.682764       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:47952->127.0.0.1:2379: read: connection reset by peer"
	W1209 11:38:20.683326       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:47964->127.0.0.1:2379: read: connection reset by peer"
	W1209 11:38:21.683766       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:21.683855       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:21.684090       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:23.333834       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:23.529146       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:23.572061       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:25.784758       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:25.896355       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:25.951703       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:29.527715       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:30.174661       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:30.338239       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:35.447958       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:35.541127       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:36.644237       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1209 11:38:40.638701       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [b76d3b70fbda2f3ddc918debff7b70b481699eb997d48e505e26ae2f11a87997] <==
	I1209 11:38:46.327137       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1209 11:38:46.327171       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1209 11:38:46.328659       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1209 11:38:46.328849       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1209 11:38:46.329205       1 shared_informer.go:320] Caches are synced for configmaps
	I1209 11:38:46.329404       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1209 11:38:46.329856       1 aggregator.go:171] initial CRD sync complete...
	I1209 11:38:46.329971       1 autoregister_controller.go:144] Starting autoregister controller
	I1209 11:38:46.329999       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1209 11:38:46.330021       1 cache.go:39] Caches are synced for autoregister controller
	I1209 11:38:46.330359       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1209 11:38:46.336889       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1209 11:38:46.358462       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1209 11:38:46.362250       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1209 11:38:46.362322       1 policy_source.go:224] refreshing policies
	E1209 11:38:46.370087       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1209 11:38:46.402063       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 11:38:47.235493       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 11:38:47.814185       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1209 11:38:47.843951       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1209 11:38:47.921334       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1209 11:38:47.953194       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 11:38:47.962014       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 11:38:49.567808       1 controller.go:615] quota admission added evaluator for: endpoints
	I1209 11:38:49.575625       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [333147e8a89653476b04335614d92a7499aee0a25829570f8ec0d64f62cc22ab] <==
	I1209 11:38:49.004076       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1209 11:38:49.004164       1 shared_informer.go:320] Caches are synced for endpoint
	I1209 11:38:49.004170       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-529265"
	I1209 11:38:49.008976       1 shared_informer.go:320] Caches are synced for persistent volume
	I1209 11:38:49.010236       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1209 11:38:49.010321       1 shared_informer.go:320] Caches are synced for ephemeral
	I1209 11:38:49.010359       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1209 11:38:49.010785       1 shared_informer.go:320] Caches are synced for crt configmap
	I1209 11:38:49.010874       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1209 11:38:49.012235       1 shared_informer.go:320] Caches are synced for stateful set
	I1209 11:38:49.014456       1 shared_informer.go:320] Caches are synced for attach detach
	I1209 11:38:49.014721       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1209 11:38:49.015610       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1209 11:38:49.018307       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1209 11:38:49.024306       1 shared_informer.go:320] Caches are synced for disruption
	I1209 11:38:49.044676       1 shared_informer.go:320] Caches are synced for GC
	I1209 11:38:49.109802       1 shared_informer.go:320] Caches are synced for cronjob
	I1209 11:38:49.218822       1 shared_informer.go:320] Caches are synced for resource quota
	I1209 11:38:49.224375       1 shared_informer.go:320] Caches are synced for resource quota
	I1209 11:38:49.646663       1 shared_informer.go:320] Caches are synced for garbage collector
	I1209 11:38:49.665628       1 shared_informer.go:320] Caches are synced for garbage collector
	I1209 11:38:49.665666       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1209 11:38:51.312331       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="35.077052ms"
	I1209 11:38:51.324975       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="10.790117ms"
	I1209 11:38:51.325838       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="96.546µs"
	
	
	==> kube-controller-manager [33ba3053afaf0a9fcd10abf6dde45a993ff265831a41dd56519bfabb6eeda8ca] <==
	
	
	==> kube-proxy [54723efd278180f1068f7ebf26c7f1aefaa082144fee4af513ccfe973ac23757] <==
	 >
	E1209 11:38:31.507167       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 11:38:41.652103       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-529265\": dial tcp 192.168.39.137:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.137:59852->192.168.39.137:8443: read: connection reset by peer"
	E1209 11:38:42.797538       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-529265\": dial tcp 192.168.39.137:8443: connect: connection refused"
	I1209 11:38:46.323217       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.137"]
	E1209 11:38:46.323356       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 11:38:46.399370       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 11:38:46.399422       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 11:38:46.399454       1 server_linux.go:169] "Using iptables Proxier"
	I1209 11:38:46.402287       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 11:38:46.402629       1 server.go:483] "Version info" version="v1.31.2"
	I1209 11:38:46.402654       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 11:38:46.406242       1 config.go:199] "Starting service config controller"
	I1209 11:38:46.408194       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 11:38:46.408335       1 config.go:105] "Starting endpoint slice config controller"
	I1209 11:38:46.408359       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 11:38:46.409035       1 config.go:328] "Starting node config controller"
	I1209 11:38:46.410714       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 11:38:46.508881       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 11:38:46.509003       1 shared_informer.go:320] Caches are synced for service config
	I1209 11:38:46.511066       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7b56cd8c9787715b1e91b413565fe2f41a70c9f63a067ff3d5e33d92f6e77bb1] <==
	
	
	==> kube-scheduler [2da0baee10803c8d66c70488dc28bbcb5168ac188eedf55f3bdcf8ca35ec0b6e] <==
	I1209 11:38:21.028248       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [fb3d343aa45baf1f29eee5aa63bb0b681ca1a3f371c7df0b297730eed39389d5] <==
	I1209 11:38:43.864967       1 serving.go:386] Generated self-signed cert in-memory
	W1209 11:38:46.289059       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 11:38:46.289094       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 11:38:46.289104       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 11:38:46.289110       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 11:38:46.338589       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1209 11:38:46.338707       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 11:38:46.344095       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 11:38:46.344202       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 11:38:46.344942       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1209 11:38:46.345140       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 11:38:46.445405       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.698095    3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4395cca1841dbdb67cff36eec0a10ff6-ca-certs\") pod \"kube-apiserver-pause-529265\" (UID: \"4395cca1841dbdb67cff36eec0a10ff6\") " pod="kube-system/kube-apiserver-pause-529265"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.698115    3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4395cca1841dbdb67cff36eec0a10ff6-usr-share-ca-certificates\") pod \"kube-apiserver-pause-529265\" (UID: \"4395cca1841dbdb67cff36eec0a10ff6\") " pod="kube-system/kube-apiserver-pause-529265"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.698131    3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdee20116799e47b2c6d59a844ac20d4-k8s-certs\") pod \"kube-controller-manager-pause-529265\" (UID: \"cdee20116799e47b2c6d59a844ac20d4\") " pod="kube-system/kube-controller-manager-pause-529265"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.698146    3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdee20116799e47b2c6d59a844ac20d4-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-529265\" (UID: \"cdee20116799e47b2c6d59a844ac20d4\") " pod="kube-system/kube-controller-manager-pause-529265"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.703840    3363 kubelet_node_status.go:72] "Attempting to register node" node="pause-529265"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: E1209 11:38:42.704807    3363 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.137:8443: connect: connection refused" node="pause-529265"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.906707    3363 kubelet_node_status.go:72] "Attempting to register node" node="pause-529265"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.907706    3363 scope.go:117] "RemoveContainer" containerID="2da0baee10803c8d66c70488dc28bbcb5168ac188eedf55f3bdcf8ca35ec0b6e"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: E1209 11:38:42.908094    3363 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.137:8443: connect: connection refused" node="pause-529265"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.909010    3363 scope.go:117] "RemoveContainer" containerID="1f8b2a354553d276bad1bd0ba810a5aadcc95aafb81669149c05c48daa23cc85"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.910471    3363 scope.go:117] "RemoveContainer" containerID="87d9b7e067ec790850a7d5224c0afcb57c57d2b835e00985bb994729d7d45cf9"
	Dec 09 11:38:43 pause-529265 kubelet[3363]: E1209 11:38:43.094749    3363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-529265?timeout=10s\": dial tcp 192.168.39.137:8443: connect: connection refused" interval="800ms"
	Dec 09 11:38:43 pause-529265 kubelet[3363]: I1209 11:38:43.310144    3363 kubelet_node_status.go:72] "Attempting to register node" node="pause-529265"
	Dec 09 11:38:46 pause-529265 kubelet[3363]: I1209 11:38:46.417661    3363 kubelet_node_status.go:111] "Node was previously registered" node="pause-529265"
	Dec 09 11:38:46 pause-529265 kubelet[3363]: I1209 11:38:46.417847    3363 kubelet_node_status.go:75] "Successfully registered node" node="pause-529265"
	Dec 09 11:38:46 pause-529265 kubelet[3363]: I1209 11:38:46.417873    3363 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 09 11:38:46 pause-529265 kubelet[3363]: I1209 11:38:46.419289    3363 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 09 11:38:46 pause-529265 kubelet[3363]: I1209 11:38:46.474507    3363 apiserver.go:52] "Watching apiserver"
	Dec 09 11:38:46 pause-529265 kubelet[3363]: I1209 11:38:46.497121    3363 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 09 11:38:46 pause-529265 kubelet[3363]: I1209 11:38:46.598081    3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4724f3fd-b481-4a95-b628-3bdaee03df58-xtables-lock\") pod \"kube-proxy-96c5d\" (UID: \"4724f3fd-b481-4a95-b628-3bdaee03df58\") " pod="kube-system/kube-proxy-96c5d"
	Dec 09 11:38:46 pause-529265 kubelet[3363]: I1209 11:38:46.598393    3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4724f3fd-b481-4a95-b628-3bdaee03df58-lib-modules\") pod \"kube-proxy-96c5d\" (UID: \"4724f3fd-b481-4a95-b628-3bdaee03df58\") " pod="kube-system/kube-proxy-96c5d"
	Dec 09 11:38:52 pause-529265 kubelet[3363]: E1209 11:38:52.607765    3363 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744332607510844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 11:38:52 pause-529265 kubelet[3363]: E1209 11:38:52.608183    3363 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744332607510844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 11:39:02 pause-529265 kubelet[3363]: E1209 11:39:02.611677    3363 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744342610608577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 11:39:02 pause-529265 kubelet[3363]: E1209 11:39:02.611718    3363 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744342610608577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
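The exited kube-apiserver instance in the dump above repeatedly failed to reach etcd ("dial tcp 127.0.0.1:2379: connect: connection refused") before dying with "Error creating leases: error creating storage factory: context deadline exceeded". A minimal, hypothetical follow-up sketch for confirming from inside the guest whether etcd came back up, assuming the pause-529265 VM still exists and that crictl and ss are available in the guest (these invocations are illustrative and were not part of the recorded test run):

	# Hypothetical follow-up, not recorded in this run: inspect etcd from inside the guest
	out/minikube-linux-amd64 -p pause-529265 ssh "sudo crictl ps -a --name etcd"
	out/minikube-linux-amd64 -p pause-529265 ssh "sudo ss -ltnp | grep 2379"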
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-529265 -n pause-529265
helpers_test.go:261: (dbg) Run:  kubectl --context pause-529265 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
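If the pause-529265 profile has not yet been deleted, the post-mortem data collected by the helpers above and below can be reproduced by hand. This sketch simply reuses the commands already recorded in this report (status, logs, and the non-Running pod listing); it assumes the profile and the pause-529265 kubectl context are still present:

	# Same commands the post-mortem helpers run, executed manually against the profile
	out/minikube-linux-amd64 status --format={{.Host}} -p pause-529265 -n pause-529265
	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-529265 -n pause-529265
	out/minikube-linux-amd64 -p pause-529265 logs -n 25
	kubectl --context pause-529265 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running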
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-529265 -n pause-529265
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-529265 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-529265 logs -n 25: (1.190699841s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-763643 sudo cat                            | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo cat                            | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo                                | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo                                | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo                                | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo cat                            | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo cat                            | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo                                | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo                                | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo                                | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo find                           | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-763643 sudo crio                           | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-763643                                     | cilium-763643             | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC | 09 Dec 24 11:36 UTC |
	| start   | -p force-systemd-env-250964                          | force-systemd-env-250964  | jenkins | v1.34.0 | 09 Dec 24 11:36 UTC | 09 Dec 24 11:38 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-529265                                      | pause-529265              | jenkins | v1.34.0 | 09 Dec 24 11:37 UTC | 09 Dec 24 11:39 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p offline-crio-482227                               | offline-crio-482227       | jenkins | v1.34.0 | 09 Dec 24 11:37 UTC | 09 Dec 24 11:37 UTC |
	| start   | -p force-systemd-flag-451257                         | force-systemd-flag-451257 | jenkins | v1.34.0 | 09 Dec 24 11:37 UTC | 09 Dec 24 11:39 UTC |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-597739                               | NoKubernetes-597739       | jenkins | v1.34.0 | 09 Dec 24 11:38 UTC | 09 Dec 24 11:38 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-250964                          | force-systemd-env-250964  | jenkins | v1.34.0 | 09 Dec 24 11:38 UTC | 09 Dec 24 11:38 UTC |
	| start   | -p cert-expiration-752166                            | cert-expiration-752166    | jenkins | v1.34.0 | 09 Dec 24 11:38 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-597739                               | NoKubernetes-597739       | jenkins | v1.34.0 | 09 Dec 24 11:38 UTC | 09 Dec 24 11:38 UTC |
	| start   | -p NoKubernetes-597739                               | NoKubernetes-597739       | jenkins | v1.34.0 | 09 Dec 24 11:38 UTC |                     |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-451257 ssh cat                    | force-systemd-flag-451257 | jenkins | v1.34.0 | 09 Dec 24 11:39 UTC | 09 Dec 24 11:39 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-451257                         | force-systemd-flag-451257 | jenkins | v1.34.0 | 09 Dec 24 11:39 UTC | 09 Dec 24 11:39 UTC |
	| start   | -p cert-options-935628                               | cert-options-935628       | jenkins | v1.34.0 | 09 Dec 24 11:39 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 11:39:02
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 11:39:02.634472  655780 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:39:02.634726  655780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:39:02.634731  655780 out.go:358] Setting ErrFile to fd 2...
	I1209 11:39:02.634733  655780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:39:02.634896  655780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:39:02.636093  655780 out.go:352] Setting JSON to false
	I1209 11:39:02.637195  655780 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":15687,"bootTime":1733728656,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 11:39:02.637311  655780 start.go:139] virtualization: kvm guest
	I1209 11:39:02.639145  655780 out.go:177] * [cert-options-935628] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 11:39:02.640437  655780 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:39:02.640484  655780 notify.go:220] Checking for updates...
	I1209 11:39:02.642587  655780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:39:02.643645  655780 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:39:02.644588  655780 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:39:02.645622  655780 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 11:39:02.646617  655780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:39:02.647983  655780 config.go:182] Loaded profile config "NoKubernetes-597739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1209 11:39:02.648060  655780 config.go:182] Loaded profile config "cert-expiration-752166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:39:02.648163  655780 config.go:182] Loaded profile config "pause-529265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:39:02.648266  655780 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:39:02.684055  655780 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 11:39:02.685015  655780 start.go:297] selected driver: kvm2
	I1209 11:39:02.685022  655780 start.go:901] validating driver "kvm2" against <nil>
	I1209 11:39:02.685032  655780 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:39:02.685781  655780 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:39:02.685863  655780 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 11:39:02.701484  655780 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 11:39:02.701535  655780 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 11:39:02.701827  655780 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 11:39:02.701853  655780 cni.go:84] Creating CNI manager for ""
	I1209 11:39:02.701899  655780 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:39:02.701903  655780 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 11:39:02.701953  655780 start.go:340] cluster config:
	{Name:cert-options-935628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:cert-options-935628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.
1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1209 11:39:02.702051  655780 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:39:02.703750  655780 out.go:177] * Starting "cert-options-935628" primary control-plane node in "cert-options-935628" cluster
	I1209 11:39:00.053979  654294 addons.go:510] duration metric: took 2.696329ms for enable addons: enabled=[]
	I1209 11:39:00.053992  654294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:39:00.254065  654294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:39:00.277180  654294 node_ready.go:35] waiting up to 6m0s for node "pause-529265" to be "Ready" ...
	I1209 11:39:00.281459  654294 node_ready.go:49] node "pause-529265" has status "Ready":"True"
	I1209 11:39:00.281537  654294 node_ready.go:38] duration metric: took 4.313724ms for node "pause-529265" to be "Ready" ...
	I1209 11:39:00.281554  654294 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:39:00.287250  654294 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lbpw6" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:00.507793  654294 pod_ready.go:93] pod "coredns-7c65d6cfc9-lbpw6" in "kube-system" namespace has status "Ready":"True"
	I1209 11:39:00.507828  654294 pod_ready.go:82] duration metric: took 220.541392ms for pod "coredns-7c65d6cfc9-lbpw6" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:00.507844  654294 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-529265" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:00.906345  654294 pod_ready.go:93] pod "etcd-pause-529265" in "kube-system" namespace has status "Ready":"True"
	I1209 11:39:00.906382  654294 pod_ready.go:82] duration metric: took 398.528579ms for pod "etcd-pause-529265" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:00.906399  654294 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-529265" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:01.306162  654294 pod_ready.go:93] pod "kube-apiserver-pause-529265" in "kube-system" namespace has status "Ready":"True"
	I1209 11:39:01.306291  654294 pod_ready.go:82] duration metric: took 399.878535ms for pod "kube-apiserver-pause-529265" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:01.306309  654294 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-529265" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:01.705500  654294 pod_ready.go:93] pod "kube-controller-manager-pause-529265" in "kube-system" namespace has status "Ready":"True"
	I1209 11:39:01.705527  654294 pod_ready.go:82] duration metric: took 399.210052ms for pod "kube-controller-manager-pause-529265" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:01.705538  654294 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-96c5d" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:02.105469  654294 pod_ready.go:93] pod "kube-proxy-96c5d" in "kube-system" namespace has status "Ready":"True"
	I1209 11:39:02.105503  654294 pod_ready.go:82] duration metric: took 399.958612ms for pod "kube-proxy-96c5d" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:02.105512  654294 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-529265" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:02.505328  654294 pod_ready.go:93] pod "kube-scheduler-pause-529265" in "kube-system" namespace has status "Ready":"True"
	I1209 11:39:02.505365  654294 pod_ready.go:82] duration metric: took 399.84513ms for pod "kube-scheduler-pause-529265" in "kube-system" namespace to be "Ready" ...
	I1209 11:39:02.505377  654294 pod_ready.go:39] duration metric: took 2.223806691s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:39:02.505397  654294 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:39:02.505459  654294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:39:02.525680  654294 api_server.go:72] duration metric: took 2.474430571s to wait for apiserver process to appear ...
	I1209 11:39:02.525714  654294 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:39:02.525740  654294 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8443/healthz ...
	I1209 11:39:02.530874  654294 api_server.go:279] https://192.168.39.137:8443/healthz returned 200:
	ok
	I1209 11:39:02.531819  654294 api_server.go:141] control plane version: v1.31.2
	I1209 11:39:02.531839  654294 api_server.go:131] duration metric: took 6.118634ms to wait for apiserver health ...
	I1209 11:39:02.531847  654294 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:39:02.707547  654294 system_pods.go:59] 6 kube-system pods found
	I1209 11:39:02.707576  654294 system_pods.go:61] "coredns-7c65d6cfc9-lbpw6" [12d106d8-2009-4438-8487-53336745302b] Running
	I1209 11:39:02.707583  654294 system_pods.go:61] "etcd-pause-529265" [abd5409d-fef3-4a92-9827-3f46526dcc2f] Running
	I1209 11:39:02.707589  654294 system_pods.go:61] "kube-apiserver-pause-529265" [b4356d27-51ba-4063-900b-f449f9553286] Running
	I1209 11:39:02.707593  654294 system_pods.go:61] "kube-controller-manager-pause-529265" [6c770060-667c-44fe-a821-47f55502b61a] Running
	I1209 11:39:02.707600  654294 system_pods.go:61] "kube-proxy-96c5d" [4724f3fd-b481-4a95-b628-3bdaee03df58] Running
	I1209 11:39:02.707605  654294 system_pods.go:61] "kube-scheduler-pause-529265" [cb045617-3848-4e28-b2e6-2b7798897621] Running
	I1209 11:39:02.707614  654294 system_pods.go:74] duration metric: took 175.759249ms to wait for pod list to return data ...
	I1209 11:39:02.707625  654294 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:39:02.905782  654294 default_sa.go:45] found service account: "default"
	I1209 11:39:02.905807  654294 default_sa.go:55] duration metric: took 198.172667ms for default service account to be created ...
	I1209 11:39:02.905817  654294 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:39:03.106850  654294 system_pods.go:86] 6 kube-system pods found
	I1209 11:39:03.106879  654294 system_pods.go:89] "coredns-7c65d6cfc9-lbpw6" [12d106d8-2009-4438-8487-53336745302b] Running
	I1209 11:39:03.106884  654294 system_pods.go:89] "etcd-pause-529265" [abd5409d-fef3-4a92-9827-3f46526dcc2f] Running
	I1209 11:39:03.106888  654294 system_pods.go:89] "kube-apiserver-pause-529265" [b4356d27-51ba-4063-900b-f449f9553286] Running
	I1209 11:39:03.106892  654294 system_pods.go:89] "kube-controller-manager-pause-529265" [6c770060-667c-44fe-a821-47f55502b61a] Running
	I1209 11:39:03.106896  654294 system_pods.go:89] "kube-proxy-96c5d" [4724f3fd-b481-4a95-b628-3bdaee03df58] Running
	I1209 11:39:03.106899  654294 system_pods.go:89] "kube-scheduler-pause-529265" [cb045617-3848-4e28-b2e6-2b7798897621] Running
	I1209 11:39:03.106905  654294 system_pods.go:126] duration metric: took 201.083281ms to wait for k8s-apps to be running ...
	I1209 11:39:03.106912  654294 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:39:03.106955  654294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:39:03.120800  654294 system_svc.go:56] duration metric: took 13.878988ms WaitForService to wait for kubelet
	I1209 11:39:03.120830  654294 kubeadm.go:582] duration metric: took 3.06959407s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:39:03.120847  654294 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:39:03.304949  654294 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:39:03.304975  654294 node_conditions.go:123] node cpu capacity is 2
	I1209 11:39:03.304992  654294 node_conditions.go:105] duration metric: took 184.140298ms to run NodePressure ...
	I1209 11:39:03.305022  654294 start.go:241] waiting for startup goroutines ...
	I1209 11:39:03.305045  654294 start.go:246] waiting for cluster config update ...
	I1209 11:39:03.305057  654294 start.go:255] writing updated cluster config ...
	I1209 11:39:03.305382  654294 ssh_runner.go:195] Run: rm -f paused
	I1209 11:39:03.353593  654294 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:39:03.355259  654294 out.go:177] * Done! kubectl is now configured to use "pause-529265" cluster and "default" namespace by default
	I1209 11:39:01.055740  655201 main.go:141] libmachine: (cert-expiration-752166) DBG | domain cert-expiration-752166 has defined MAC address 52:54:00:b0:e3:4b in network mk-cert-expiration-752166
	I1209 11:39:01.056260  655201 main.go:141] libmachine: (cert-expiration-752166) DBG | unable to find current IP address of domain cert-expiration-752166 in network mk-cert-expiration-752166
	I1209 11:39:01.056352  655201 main.go:141] libmachine: (cert-expiration-752166) DBG | I1209 11:39:01.056257  655369 retry.go:31] will retry after 2.190798334s: waiting for machine to come up
	I1209 11:39:03.248531  655201 main.go:141] libmachine: (cert-expiration-752166) DBG | domain cert-expiration-752166 has defined MAC address 52:54:00:b0:e3:4b in network mk-cert-expiration-752166
	I1209 11:39:03.249054  655201 main.go:141] libmachine: (cert-expiration-752166) DBG | unable to find current IP address of domain cert-expiration-752166 in network mk-cert-expiration-752166
	I1209 11:39:03.249070  655201 main.go:141] libmachine: (cert-expiration-752166) DBG | I1209 11:39:03.249004  655369 retry.go:31] will retry after 3.506246014s: waiting for machine to come up
	
	
	==> CRI-O <==
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.745463638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744345745445729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f33778c-f28f-43de-85ab-673d79a4dd36 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.745936390Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2453937d-957c-4942-a0fb-abc81e27ab26 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.745994783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2453937d-957c-4942-a0fb-abc81e27ab26 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.746212391Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b76d3b70fbda2f3ddc918debff7b70b481699eb997d48e505e26ae2f11a87997,PodSandboxId:889b5ff7952d5f04a70897fb2279f2ee29495bee1751cd43b6c540320af5e802,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733744322946773539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4395cca1841dbdb67cff36eec0a10ff6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb3d343aa45baf1f29eee5aa63bb0b681ca1a3f371c7df0b297730eed39389d5,PodSandboxId:20db52729296aae0c64d8eb498ecacfb16bbd5839567cce8c9dc2218b489d03d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733744322970676546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e373a3d8a9080aa7bdaf5aa3dc33dc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6100c030924c1bda5d7e3f16ca4eee787f0ecc9713226426c7f4dbf0e28de0,PodSandboxId:8253ff818926963a734a83b6e56266f73c8e7e78dcf7cbf3ed723c5096732b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733744322917286791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e728576d235a404dc3d55892eb05ecb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333147e8a89653476b04335614d92a7499aee0a25829570f8ec0d64f62cc22ab,PodSandboxId:c3dbc3f05adbc85fd267b5924280225ff937781d899a4d64e8a174cab7329880,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733744319338177475,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee20116799e47b2c6d59a844ac20d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdec1e4dfdcac30e0b3b896e872220973c73c7b9ee1573d4b2864f81c89f277,PodSandboxId:90cac579c4dc9ed51cc3e08fafba7dd1bbf9cbe30aaf681aa35dcfbed4795078,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733744317342397931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lbpw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d106d8-2009-4438-8487-53336745302b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54723efd278180f1068f7ebf26c7f1aefaa082144fee4af513ccfe973ac23757,PodSandboxId:5a9cc6f32c36e65e0aaf2bd3e71e0cf5131ea1577a850c304a29d66a7e575868,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733744311342566119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96c5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724f3fd-b481-4a95-b628-3bdaee03df58,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8844efda91d299dc8c21e077e850c1a61f66450372a1b174157411dbf5db1ce,PodSandboxId:90cac579c4dc9ed51cc3e08fafba7dd1bbf9cbe30aaf681aa35dcfbed4795078,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733744299892817605,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lbpw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d106d8-2009-4438-8487-53336745302b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b56cd8c9787715b1e91b413565fe2f41a70c9f63a067ff3d5e33d92f6e77bb1,PodSandboxId:5a9cc6f32c36e65e0aaf2bd3e71e0cf5131ea1577a850c304a29d66a7e575868,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733744299146242176,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-96c5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724f3fd-b481-4a95-b628-3bdaee03df58,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8b2a354553d276bad1bd0ba810a5aadcc95aafb81669149c05c48daa23cc85,PodSandboxId:8253ff818926963a734a83b6e56266f73c8e7e78dcf7cbf3ed723c5096732b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733744299111041317,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-529265,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: e728576d235a404dc3d55892eb05ecb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ba3053afaf0a9fcd10abf6dde45a993ff265831a41dd56519bfabb6eeda8ca,PodSandboxId:c3dbc3f05adbc85fd267b5924280225ff937781d899a4d64e8a174cab7329880,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733744299068063820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-529265,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee20116799e47b2c6d59a844ac20d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da0baee10803c8d66c70488dc28bbcb5168ac188eedf55f3bdcf8ca35ec0b6e,PodSandboxId:20db52729296aae0c64d8eb498ecacfb16bbd5839567cce8c9dc2218b489d03d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733744298989265060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-529265,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 19e373a3d8a9080aa7bdaf5aa3dc33dc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d9b7e067ec790850a7d5224c0afcb57c57d2b835e00985bb994729d7d45cf9,PodSandboxId:889b5ff7952d5f04a70897fb2279f2ee29495bee1751cd43b6c540320af5e802,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733744298965604594,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 4395cca1841dbdb67cff36eec0a10ff6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2453937d-957c-4942-a0fb-abc81e27ab26 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.783767371Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d68fb5e2-9df3-4fc9-9f3a-8a7a524efa83 name=/runtime.v1.RuntimeService/Version
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.783849718Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d68fb5e2-9df3-4fc9-9f3a-8a7a524efa83 name=/runtime.v1.RuntimeService/Version
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.785215064Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6c399bb-2b19-4c4d-a290-b66442ffcf91 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.786024387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744345785996485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6c399bb-2b19-4c4d-a290-b66442ffcf91 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.786449244Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a934f90b-344e-44dc-ad92-e10fc10d2161 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.786521261Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a934f90b-344e-44dc-ad92-e10fc10d2161 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.786774421Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b76d3b70fbda2f3ddc918debff7b70b481699eb997d48e505e26ae2f11a87997,PodSandboxId:889b5ff7952d5f04a70897fb2279f2ee29495bee1751cd43b6c540320af5e802,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733744322946773539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4395cca1841dbdb67cff36eec0a10ff6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb3d343aa45baf1f29eee5aa63bb0b681ca1a3f371c7df0b297730eed39389d5,PodSandboxId:20db52729296aae0c64d8eb498ecacfb16bbd5839567cce8c9dc2218b489d03d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733744322970676546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e373a3d8a9080aa7bdaf5aa3dc33dc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6100c030924c1bda5d7e3f16ca4eee787f0ecc9713226426c7f4dbf0e28de0,PodSandboxId:8253ff818926963a734a83b6e56266f73c8e7e78dcf7cbf3ed723c5096732b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733744322917286791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e728576d235a404dc3d55892eb05ecb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333147e8a89653476b04335614d92a7499aee0a25829570f8ec0d64f62cc22ab,PodSandboxId:c3dbc3f05adbc85fd267b5924280225ff937781d899a4d64e8a174cab7329880,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733744319338177475,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee20116799e47b2c6d59a844ac20d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdec1e4dfdcac30e0b3b896e872220973c73c7b9ee1573d4b2864f81c89f277,PodSandboxId:90cac579c4dc9ed51cc3e08fafba7dd1bbf9cbe30aaf681aa35dcfbed4795078,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733744317342397931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lbpw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d106d8-2009-4438-8487-53336745302b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54723efd278180f1068f7ebf26c7f1aefaa082144fee4af513ccfe973ac23757,PodSandboxId:5a9cc6f32c36e65e0aaf2bd3e71e0cf5131ea1577a850c304a29d66a7e575868,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733744311342566119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96c5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724f3fd-b481-4a95-b628-3bdaee03df58,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8844efda91d299dc8c21e077e850c1a61f66450372a1b174157411dbf5db1ce,PodSandboxId:90cac579c4dc9ed51cc3e08fafba7dd1bbf9cbe30aaf681aa35dcfbed4795078,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733744299892817605,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lbpw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d106d8-2009-4438-8487-53336745302b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b56cd8c9787715b1e91b413565fe2f41a70c9f63a067ff3d5e33d92f6e77bb1,PodSandboxId:5a9cc6f32c36e65e0aaf2bd3e71e0cf5131ea1577a850c304a29d66a7e575868,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733744299146242176,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-96c5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724f3fd-b481-4a95-b628-3bdaee03df58,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8b2a354553d276bad1bd0ba810a5aadcc95aafb81669149c05c48daa23cc85,PodSandboxId:8253ff818926963a734a83b6e56266f73c8e7e78dcf7cbf3ed723c5096732b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733744299111041317,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-529265,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: e728576d235a404dc3d55892eb05ecb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ba3053afaf0a9fcd10abf6dde45a993ff265831a41dd56519bfabb6eeda8ca,PodSandboxId:c3dbc3f05adbc85fd267b5924280225ff937781d899a4d64e8a174cab7329880,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733744299068063820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-529265,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee20116799e47b2c6d59a844ac20d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da0baee10803c8d66c70488dc28bbcb5168ac188eedf55f3bdcf8ca35ec0b6e,PodSandboxId:20db52729296aae0c64d8eb498ecacfb16bbd5839567cce8c9dc2218b489d03d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733744298989265060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-529265,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 19e373a3d8a9080aa7bdaf5aa3dc33dc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d9b7e067ec790850a7d5224c0afcb57c57d2b835e00985bb994729d7d45cf9,PodSandboxId:889b5ff7952d5f04a70897fb2279f2ee29495bee1751cd43b6c540320af5e802,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733744298965604594,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 4395cca1841dbdb67cff36eec0a10ff6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a934f90b-344e-44dc-ad92-e10fc10d2161 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.833066189Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ecec3fe-508f-4d1d-90d9-2ba48beaeb16 name=/runtime.v1.RuntimeService/Version
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.833148990Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ecec3fe-508f-4d1d-90d9-2ba48beaeb16 name=/runtime.v1.RuntimeService/Version
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.834685767Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98e71fc5-002f-4fb3-852a-0563f4164bb3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.835357896Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744345835318613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98e71fc5-002f-4fb3-852a-0563f4164bb3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.836249906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48ec75e1-4414-40d2-a799-316ff32f318f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.836400465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48ec75e1-4414-40d2-a799-316ff32f318f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.836670666Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b76d3b70fbda2f3ddc918debff7b70b481699eb997d48e505e26ae2f11a87997,PodSandboxId:889b5ff7952d5f04a70897fb2279f2ee29495bee1751cd43b6c540320af5e802,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733744322946773539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4395cca1841dbdb67cff36eec0a10ff6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb3d343aa45baf1f29eee5aa63bb0b681ca1a3f371c7df0b297730eed39389d5,PodSandboxId:20db52729296aae0c64d8eb498ecacfb16bbd5839567cce8c9dc2218b489d03d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733744322970676546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e373a3d8a9080aa7bdaf5aa3dc33dc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6100c030924c1bda5d7e3f16ca4eee787f0ecc9713226426c7f4dbf0e28de0,PodSandboxId:8253ff818926963a734a83b6e56266f73c8e7e78dcf7cbf3ed723c5096732b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733744322917286791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e728576d235a404dc3d55892eb05ecb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333147e8a89653476b04335614d92a7499aee0a25829570f8ec0d64f62cc22ab,PodSandboxId:c3dbc3f05adbc85fd267b5924280225ff937781d899a4d64e8a174cab7329880,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733744319338177475,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee20116799e47b2c6d59a844ac20d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdec1e4dfdcac30e0b3b896e872220973c73c7b9ee1573d4b2864f81c89f277,PodSandboxId:90cac579c4dc9ed51cc3e08fafba7dd1bbf9cbe30aaf681aa35dcfbed4795078,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733744317342397931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lbpw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d106d8-2009-4438-8487-53336745302b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54723efd278180f1068f7ebf26c7f1aefaa082144fee4af513ccfe973ac23757,PodSandboxId:5a9cc6f32c36e65e0aaf2bd3e71e0cf5131ea1577a850c304a29d66a7e575868,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733744311342566119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96c5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724f3fd-b481-4a95-b628-3bdaee03df58,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8844efda91d299dc8c21e077e850c1a61f66450372a1b174157411dbf5db1ce,PodSandboxId:90cac579c4dc9ed51cc3e08fafba7dd1bbf9cbe30aaf681aa35dcfbed4795078,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733744299892817605,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lbpw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d106d8-2009-4438-8487-53336745302b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b56cd8c9787715b1e91b413565fe2f41a70c9f63a067ff3d5e33d92f6e77bb1,PodSandboxId:5a9cc6f32c36e65e0aaf2bd3e71e0cf5131ea1577a850c304a29d66a7e575868,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733744299146242176,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-96c5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724f3fd-b481-4a95-b628-3bdaee03df58,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8b2a354553d276bad1bd0ba810a5aadcc95aafb81669149c05c48daa23cc85,PodSandboxId:8253ff818926963a734a83b6e56266f73c8e7e78dcf7cbf3ed723c5096732b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733744299111041317,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-529265,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: e728576d235a404dc3d55892eb05ecb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ba3053afaf0a9fcd10abf6dde45a993ff265831a41dd56519bfabb6eeda8ca,PodSandboxId:c3dbc3f05adbc85fd267b5924280225ff937781d899a4d64e8a174cab7329880,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733744299068063820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-529265,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee20116799e47b2c6d59a844ac20d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da0baee10803c8d66c70488dc28bbcb5168ac188eedf55f3bdcf8ca35ec0b6e,PodSandboxId:20db52729296aae0c64d8eb498ecacfb16bbd5839567cce8c9dc2218b489d03d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733744298989265060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-529265,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 19e373a3d8a9080aa7bdaf5aa3dc33dc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d9b7e067ec790850a7d5224c0afcb57c57d2b835e00985bb994729d7d45cf9,PodSandboxId:889b5ff7952d5f04a70897fb2279f2ee29495bee1751cd43b6c540320af5e802,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733744298965604594,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 4395cca1841dbdb67cff36eec0a10ff6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48ec75e1-4414-40d2-a799-316ff32f318f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.875614900Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=126f5fbd-1294-456a-9fe0-aa477c868b88 name=/runtime.v1.RuntimeService/Version
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.875704710Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=126f5fbd-1294-456a-9fe0-aa477c868b88 name=/runtime.v1.RuntimeService/Version
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.877074108Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=083bd7f7-5029-4744-b356-2c13ca21095d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.877441091Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744345877420499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=083bd7f7-5029-4744-b356-2c13ca21095d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.877950598Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c3fea7f-992b-425f-92bf-2a6d48f99fa9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.878021701Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c3fea7f-992b-425f-92bf-2a6d48f99fa9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 11:39:05 pause-529265 crio[2336]: time="2024-12-09 11:39:05.878258822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b76d3b70fbda2f3ddc918debff7b70b481699eb997d48e505e26ae2f11a87997,PodSandboxId:889b5ff7952d5f04a70897fb2279f2ee29495bee1751cd43b6c540320af5e802,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733744322946773539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4395cca1841dbdb67cff36eec0a10ff6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb3d343aa45baf1f29eee5aa63bb0b681ca1a3f371c7df0b297730eed39389d5,PodSandboxId:20db52729296aae0c64d8eb498ecacfb16bbd5839567cce8c9dc2218b489d03d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733744322970676546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e373a3d8a9080aa7bdaf5aa3dc33dc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6100c030924c1bda5d7e3f16ca4eee787f0ecc9713226426c7f4dbf0e28de0,PodSandboxId:8253ff818926963a734a83b6e56266f73c8e7e78dcf7cbf3ed723c5096732b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733744322917286791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e728576d235a404dc3d55892eb05ecb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333147e8a89653476b04335614d92a7499aee0a25829570f8ec0d64f62cc22ab,PodSandboxId:c3dbc3f05adbc85fd267b5924280225ff937781d899a4d64e8a174cab7329880,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733744319338177475,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee20116799e47b2c6d59a844ac20d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdec1e4dfdcac30e0b3b896e872220973c73c7b9ee1573d4b2864f81c89f277,PodSandboxId:90cac579c4dc9ed51cc3e08fafba7dd1bbf9cbe30aaf681aa35dcfbed4795078,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733744317342397931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lbpw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d106d8-2009-4438-8487-53336745302b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54723efd278180f1068f7ebf26c7f1aefaa082144fee4af513ccfe973ac23757,PodSandboxId:5a9cc6f32c36e65e0aaf2bd3e71e0cf5131ea1577a850c304a29d66a7e575868,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733744311342566119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96c5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724f3fd-b481-4a95-b628-3bdaee03df58,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8844efda91d299dc8c21e077e850c1a61f66450372a1b174157411dbf5db1ce,PodSandboxId:90cac579c4dc9ed51cc3e08fafba7dd1bbf9cbe30aaf681aa35dcfbed4795078,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733744299892817605,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lbpw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d106d8-2009-4438-8487-53336745302b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b56cd8c9787715b1e91b413565fe2f41a70c9f63a067ff3d5e33d92f6e77bb1,PodSandboxId:5a9cc6f32c36e65e0aaf2bd3e71e0cf5131ea1577a850c304a29d66a7e575868,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733744299146242176,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-96c5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4724f3fd-b481-4a95-b628-3bdaee03df58,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8b2a354553d276bad1bd0ba810a5aadcc95aafb81669149c05c48daa23cc85,PodSandboxId:8253ff818926963a734a83b6e56266f73c8e7e78dcf7cbf3ed723c5096732b83,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733744299111041317,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-529265,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: e728576d235a404dc3d55892eb05ecb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ba3053afaf0a9fcd10abf6dde45a993ff265831a41dd56519bfabb6eeda8ca,PodSandboxId:c3dbc3f05adbc85fd267b5924280225ff937781d899a4d64e8a174cab7329880,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733744299068063820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-529265,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee20116799e47b2c6d59a844ac20d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da0baee10803c8d66c70488dc28bbcb5168ac188eedf55f3bdcf8ca35ec0b6e,PodSandboxId:20db52729296aae0c64d8eb498ecacfb16bbd5839567cce8c9dc2218b489d03d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733744298989265060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-529265,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 19e373a3d8a9080aa7bdaf5aa3dc33dc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87d9b7e067ec790850a7d5224c0afcb57c57d2b835e00985bb994729d7d45cf9,PodSandboxId:889b5ff7952d5f04a70897fb2279f2ee29495bee1751cd43b6c540320af5e802,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733744298965604594,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-529265,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 4395cca1841dbdb67cff36eec0a10ff6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c3fea7f-992b-425f-92bf-2a6d48f99fa9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fb3d343aa45ba       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   22 seconds ago      Running             kube-scheduler            2                   20db52729296a       kube-scheduler-pause-529265
	b76d3b70fbda2       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   22 seconds ago      Running             kube-apiserver            2                   889b5ff7952d5       kube-apiserver-pause-529265
	0b6100c030924       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   23 seconds ago      Running             etcd                      2                   8253ff8189269       etcd-pause-529265
	333147e8a8965       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   26 seconds ago      Running             kube-controller-manager   2                   c3dbc3f05adbc       kube-controller-manager-pause-529265
	0bdec1e4dfdca       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   28 seconds ago      Running             coredns                   2                   90cac579c4dc9       coredns-7c65d6cfc9-lbpw6
	54723efd27818       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   34 seconds ago      Running             kube-proxy                2                   5a9cc6f32c36e       kube-proxy-96c5d
	a8844efda91d2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   46 seconds ago      Exited              coredns                   1                   90cac579c4dc9       coredns-7c65d6cfc9-lbpw6
	7b56cd8c97877       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   46 seconds ago      Exited              kube-proxy                1                   5a9cc6f32c36e       kube-proxy-96c5d
	1f8b2a354553d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   46 seconds ago      Exited              etcd                      1                   8253ff8189269       etcd-pause-529265
	33ba3053afaf0       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   46 seconds ago      Exited              kube-controller-manager   1                   c3dbc3f05adbc       kube-controller-manager-pause-529265
	2da0baee10803       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   46 seconds ago      Exited              kube-scheduler            1                   20db52729296a       kube-scheduler-pause-529265
	87d9b7e067ec7       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   46 seconds ago      Exited              kube-apiserver            1                   889b5ff7952d5       kube-apiserver-pause-529265
	
	
	==> coredns [0bdec1e4dfdcac30e0b3b896e872220973c73c7b9ee1573d4b2864f81c89f277] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47212->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47212->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47198->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47198->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47196->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47196->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55357 - 51627 "HINFO IN 4791590791798458663.8364992037365514839. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011256759s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [a8844efda91d299dc8c21e077e850c1a61f66450372a1b174157411dbf5db1ce] <==
	
	
	==> describe nodes <==
	Name:               pause-529265
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-529265
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=pause-529265
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T11_37_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 11:37:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-529265
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 11:38:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 11:38:46 +0000   Mon, 09 Dec 2024 11:37:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 11:38:46 +0000   Mon, 09 Dec 2024 11:37:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 11:38:46 +0000   Mon, 09 Dec 2024 11:37:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 11:38:46 +0000   Mon, 09 Dec 2024 11:37:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    pause-529265
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 060a33f57f1a4609945486db94879266
	  System UUID:                060a33f5-7f1a-4609-9454-86db94879266
	  Boot ID:                    2c03bb52-5989-4560-964d-c1b03c82d1b6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-lbpw6                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     107s
	  kube-system                 etcd-pause-529265                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         112s
	  kube-system                 kube-apiserver-pause-529265             250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-pause-529265    200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-96c5d                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-pause-529265             100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeHasSufficientPID     112s               kubelet          Node pause-529265 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  112s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node pause-529265 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node pause-529265 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  NodeReady                111s               kubelet          Node pause-529265 status is now: NodeReady
	  Normal  RegisteredNode           108s               node-controller  Node pause-529265 event: Registered Node pause-529265 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-529265 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-529265 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-529265 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node pause-529265 event: Registered Node pause-529265 in Controller
	
	
	==> dmesg <==
	[  +0.059367] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051363] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.163370] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.130374] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.275357] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Dec 9 11:37] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.440170] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.062884] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.490443] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[  +0.075393] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.322542] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[  +0.123998] kauditd_printk_skb: 21 callbacks suppressed
	[  +7.179026] kauditd_printk_skb: 88 callbacks suppressed
	[Dec 9 11:38] systemd-fstab-generator[2260]: Ignoring "noauto" option for root device
	[  +0.084828] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.069768] systemd-fstab-generator[2272]: Ignoring "noauto" option for root device
	[  +0.190385] systemd-fstab-generator[2286]: Ignoring "noauto" option for root device
	[  +0.146453] systemd-fstab-generator[2298]: Ignoring "noauto" option for root device
	[  +0.309013] systemd-fstab-generator[2326]: Ignoring "noauto" option for root device
	[  +2.356409] systemd-fstab-generator[2446]: Ignoring "noauto" option for root device
	[  +2.285549] kauditd_printk_skb: 195 callbacks suppressed
	[ +21.504768] systemd-fstab-generator[3356]: Ignoring "noauto" option for root device
	[  +0.279304] kauditd_printk_skb: 22 callbacks suppressed
	[  +7.398936] kauditd_printk_skb: 14 callbacks suppressed
	[ +10.203712] systemd-fstab-generator[3689]: Ignoring "noauto" option for root device
	
	
	==> etcd [0b6100c030924c1bda5d7e3f16ca4eee787f0ecc9713226426c7f4dbf0e28de0] <==
	{"level":"info","ts":"2024-12-09T11:38:49.557510Z","caller":"traceutil/trace.go:171","msg":"trace[437221602] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:448; }","duration":"349.05938ms","start":"2024-12-09T11:38:49.208439Z","end":"2024-12-09T11:38:49.557499Z","steps":["trace[437221602] 'agreement among raft nodes before linearized reading'  (duration: 348.968103ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T11:38:49.557562Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T11:38:49.208396Z","time spent":"349.157706ms","remote":"127.0.0.1:52184","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":1,"response size":226,"request content":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" "}
	{"level":"warn","ts":"2024-12-09T11:38:49.891749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.33679ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9748766706966870230 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-gh29q\" mod_revision:384 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-gh29q\" value_size:1193 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-gh29q\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-09T11:38:49.892001Z","caller":"traceutil/trace.go:171","msg":"trace[1995444147] linearizableReadLoop","detail":"{readStateIndex:490; appliedIndex:488; }","duration":"223.445401ms","start":"2024-12-09T11:38:49.668531Z","end":"2024-12-09T11:38:49.891976Z","steps":["trace[1995444147] 'read index received'  (duration: 99.718174ms)","trace[1995444147] 'applied index is now lower than readState.Index'  (duration: 123.726426ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T11:38:49.892167Z","caller":"traceutil/trace.go:171","msg":"trace[2053266273] transaction","detail":"{read_only:false; response_revision:451; number_of_response:1; }","duration":"310.612972ms","start":"2024-12-09T11:38:49.581496Z","end":"2024-12-09T11:38:49.892109Z","steps":["trace[2053266273] 'process raft request'  (duration: 310.351522ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T11:38:49.892289Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.748045ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4133"}
	{"level":"info","ts":"2024-12-09T11:38:49.892358Z","caller":"traceutil/trace.go:171","msg":"trace[1798635486] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:451; }","duration":"223.820329ms","start":"2024-12-09T11:38:49.668527Z","end":"2024-12-09T11:38:49.892347Z","steps":["trace[1798635486] 'agreement among raft nodes before linearized reading'  (duration: 223.719194ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T11:38:49.892404Z","caller":"traceutil/trace.go:171","msg":"trace[1064574568] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"315.940087ms","start":"2024-12-09T11:38:49.576450Z","end":"2024-12-09T11:38:49.892390Z","steps":["trace[1064574568] 'process raft request'  (duration: 191.851474ms)","trace[1064574568] 'compare'  (duration: 123.259241ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T11:38:49.892475Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T11:38:49.576442Z","time spent":"316.010289ms","remote":"127.0.0.1:52256","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1252,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-gh29q\" mod_revision:384 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-gh29q\" value_size:1193 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-gh29q\" > >"}
	{"level":"warn","ts":"2024-12-09T11:38:49.892941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.202157ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-pause-529265\" ","response":"range_response_count:1 size:4566"}
	{"level":"info","ts":"2024-12-09T11:38:49.894759Z","caller":"traceutil/trace.go:171","msg":"trace[1371166847] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-pause-529265; range_end:; response_count:1; response_revision:451; }","duration":"226.062162ms","start":"2024-12-09T11:38:49.668683Z","end":"2024-12-09T11:38:49.894746Z","steps":["trace[1371166847] 'agreement among raft nodes before linearized reading'  (duration: 224.178743ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T11:38:49.894517Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.89465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/pause-529265\" ","response":"range_response_count:1 size:706"}
	{"level":"info","ts":"2024-12-09T11:38:49.895887Z","caller":"traceutil/trace.go:171","msg":"trace[2146788514] range","detail":"{range_begin:/registry/csinodes/pause-529265; range_end:; response_count:1; response_revision:451; }","duration":"227.26913ms","start":"2024-12-09T11:38:49.668609Z","end":"2024-12-09T11:38:49.895878Z","steps":["trace[2146788514] 'agreement among raft nodes before linearized reading'  (duration: 223.919488ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T11:38:49.894599Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.912731ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-529265\" ","response":"range_response_count:1 size:5847"}
	{"level":"warn","ts":"2024-12-09T11:38:49.894640Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.967034ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-529265\" ","response":"range_response_count:1 size:6600"}
	{"level":"warn","ts":"2024-12-09T11:38:49.894668Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.013475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/kube-dns\" ","response":"range_response_count:1 size:1211"}
	{"level":"warn","ts":"2024-12-09T11:38:49.894697Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.065318ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-529265\" ","response":"range_response_count:1 size:7000"}
	{"level":"warn","ts":"2024-12-09T11:38:49.892303Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T11:38:49.581487Z","time spent":"310.762313ms","remote":"127.0.0.1:52442","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:448 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2024-12-09T11:38:49.897278Z","caller":"traceutil/trace.go:171","msg":"trace[309388272] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-529265; range_end:; response_count:1; response_revision:451; }","duration":"228.590909ms","start":"2024-12-09T11:38:49.668679Z","end":"2024-12-09T11:38:49.897270Z","steps":["trace[309388272] 'agreement among raft nodes before linearized reading'  (duration: 225.859553ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T11:38:49.897375Z","caller":"traceutil/trace.go:171","msg":"trace[1349455708] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-529265; range_end:; response_count:1; response_revision:451; }","duration":"228.693473ms","start":"2024-12-09T11:38:49.668668Z","end":"2024-12-09T11:38:49.897362Z","steps":["trace[1349455708] 'agreement among raft nodes before linearized reading'  (duration: 225.946897ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T11:38:49.897474Z","caller":"traceutil/trace.go:171","msg":"trace[1040830611] range","detail":"{range_begin:/registry/services/specs/kube-system/kube-dns; range_end:; response_count:1; response_revision:451; }","duration":"228.798608ms","start":"2024-12-09T11:38:49.668649Z","end":"2024-12-09T11:38:49.897448Z","steps":["trace[1040830611] 'agreement among raft nodes before linearized reading'  (duration: 226.001452ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T11:38:49.897568Z","caller":"traceutil/trace.go:171","msg":"trace[158366361] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-529265; range_end:; response_count:1; response_revision:451; }","duration":"228.933109ms","start":"2024-12-09T11:38:49.668628Z","end":"2024-12-09T11:38:49.897561Z","steps":["trace[158366361] 'agreement among raft nodes before linearized reading'  (duration: 226.051315ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T11:38:50.167329Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.314956ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-529265\" ","response":"range_response_count:1 size:5428"}
	{"level":"info","ts":"2024-12-09T11:38:50.167806Z","caller":"traceutil/trace.go:171","msg":"trace[379048474] range","detail":"{range_begin:/registry/minions/pause-529265; range_end:; response_count:1; response_revision:451; }","duration":"173.802852ms","start":"2024-12-09T11:38:49.993987Z","end":"2024-12-09T11:38:50.167789Z","steps":["trace[379048474] 'range keys from in-memory index tree'  (duration: 173.224534ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T11:38:51.273853Z","caller":"traceutil/trace.go:171","msg":"trace[434038371] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"168.197261ms","start":"2024-12-09T11:38:51.105643Z","end":"2024-12-09T11:38:51.273840Z","steps":["trace[434038371] 'process raft request'  (duration: 168.090393ms)"],"step_count":1}
	
	
	==> etcd [1f8b2a354553d276bad1bd0ba810a5aadcc95aafb81669149c05c48daa23cc85] <==
	{"level":"info","ts":"2024-12-09T11:38:20.149689Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-12-09T11:38:20.159439Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"8623b2a8b011233f","local-member-id":"5527995f6263874a","commit-index":420}
	{"level":"info","ts":"2024-12-09T11:38:20.159587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a switched to configuration voters=()"}
	{"level":"info","ts":"2024-12-09T11:38:20.159675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became follower at term 2"}
	{"level":"info","ts":"2024-12-09T11:38:20.159711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 5527995f6263874a [peers: [], term: 2, commit: 420, applied: 0, lastindex: 420, lastterm: 2]"}
	{"level":"warn","ts":"2024-12-09T11:38:20.164982Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-12-09T11:38:20.205701Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":398}
	{"level":"info","ts":"2024-12-09T11:38:20.249979Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-12-09T11:38:20.257203Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"5527995f6263874a","timeout":"7s"}
	{"level":"info","ts":"2024-12-09T11:38:20.259220Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"5527995f6263874a"}
	{"level":"info","ts":"2024-12-09T11:38:20.260972Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"5527995f6263874a","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-12-09T11:38:20.261556Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T11:38:20.262225Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-12-09T11:38:20.262396Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-12-09T11:38:20.268249Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-12-09T11:38:20.268343Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-12-09T11:38:20.269269Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-09T11:38:20.269481Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"5527995f6263874a","initial-advertise-peer-urls":["https://192.168.39.137:2380"],"listen-peer-urls":["https://192.168.39.137:2380"],"advertise-client-urls":["https://192.168.39.137:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.137:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-09T11:38:20.269518Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-09T11:38:20.269606Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2024-12-09T11:38:20.269629Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2024-12-09T11:38:20.262622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a switched to configuration voters=(6136041652267222858)"}
	{"level":"info","ts":"2024-12-09T11:38:20.270150Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8623b2a8b011233f","local-member-id":"5527995f6263874a","added-peer-id":"5527995f6263874a","added-peer-peer-urls":["https://192.168.39.137:2380"]}
	{"level":"info","ts":"2024-12-09T11:38:20.270307Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8623b2a8b011233f","local-member-id":"5527995f6263874a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:38:20.270340Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	
	
	==> kernel <==
	 11:39:06 up 2 min,  0 users,  load average: 1.22, 0.43, 0.16
	Linux pause-529265 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [87d9b7e067ec790850a7d5224c0afcb57c57d2b835e00985bb994729d7d45cf9] <==
	I1209 11:38:19.597338       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 11:38:20.595255       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1209 11:38:20.620040       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1209 11:38:20.623762       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1209 11:38:20.637011       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1209 11:38:20.637328       1 instance.go:232] Using reconciler: lease
	W1209 11:38:20.682336       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:44434->127.0.0.1:2379: read: connection reset by peer"
	W1209 11:38:20.682764       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:47952->127.0.0.1:2379: read: connection reset by peer"
	W1209 11:38:20.683326       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:47964->127.0.0.1:2379: read: connection reset by peer"
	W1209 11:38:21.683766       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:21.683855       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:21.684090       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:23.333834       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:23.529146       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:23.572061       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:25.784758       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:25.896355       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:25.951703       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:29.527715       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:30.174661       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:30.338239       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:35.447958       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:35.541127       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:38:36.644237       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1209 11:38:40.638701       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [b76d3b70fbda2f3ddc918debff7b70b481699eb997d48e505e26ae2f11a87997] <==
	I1209 11:38:46.327137       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1209 11:38:46.327171       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1209 11:38:46.328659       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1209 11:38:46.328849       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1209 11:38:46.329205       1 shared_informer.go:320] Caches are synced for configmaps
	I1209 11:38:46.329404       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1209 11:38:46.329856       1 aggregator.go:171] initial CRD sync complete...
	I1209 11:38:46.329971       1 autoregister_controller.go:144] Starting autoregister controller
	I1209 11:38:46.329999       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1209 11:38:46.330021       1 cache.go:39] Caches are synced for autoregister controller
	I1209 11:38:46.330359       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1209 11:38:46.336889       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1209 11:38:46.358462       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1209 11:38:46.362250       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1209 11:38:46.362322       1 policy_source.go:224] refreshing policies
	E1209 11:38:46.370087       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1209 11:38:46.402063       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 11:38:47.235493       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 11:38:47.814185       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1209 11:38:47.843951       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1209 11:38:47.921334       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1209 11:38:47.953194       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 11:38:47.962014       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 11:38:49.567808       1 controller.go:615] quota admission added evaluator for: endpoints
	I1209 11:38:49.575625       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [333147e8a89653476b04335614d92a7499aee0a25829570f8ec0d64f62cc22ab] <==
	I1209 11:38:49.004076       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1209 11:38:49.004164       1 shared_informer.go:320] Caches are synced for endpoint
	I1209 11:38:49.004170       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-529265"
	I1209 11:38:49.008976       1 shared_informer.go:320] Caches are synced for persistent volume
	I1209 11:38:49.010236       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1209 11:38:49.010321       1 shared_informer.go:320] Caches are synced for ephemeral
	I1209 11:38:49.010359       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1209 11:38:49.010785       1 shared_informer.go:320] Caches are synced for crt configmap
	I1209 11:38:49.010874       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1209 11:38:49.012235       1 shared_informer.go:320] Caches are synced for stateful set
	I1209 11:38:49.014456       1 shared_informer.go:320] Caches are synced for attach detach
	I1209 11:38:49.014721       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1209 11:38:49.015610       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1209 11:38:49.018307       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1209 11:38:49.024306       1 shared_informer.go:320] Caches are synced for disruption
	I1209 11:38:49.044676       1 shared_informer.go:320] Caches are synced for GC
	I1209 11:38:49.109802       1 shared_informer.go:320] Caches are synced for cronjob
	I1209 11:38:49.218822       1 shared_informer.go:320] Caches are synced for resource quota
	I1209 11:38:49.224375       1 shared_informer.go:320] Caches are synced for resource quota
	I1209 11:38:49.646663       1 shared_informer.go:320] Caches are synced for garbage collector
	I1209 11:38:49.665628       1 shared_informer.go:320] Caches are synced for garbage collector
	I1209 11:38:49.665666       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1209 11:38:51.312331       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="35.077052ms"
	I1209 11:38:51.324975       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="10.790117ms"
	I1209 11:38:51.325838       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="96.546µs"
	
	
	==> kube-controller-manager [33ba3053afaf0a9fcd10abf6dde45a993ff265831a41dd56519bfabb6eeda8ca] <==
	
	
	==> kube-proxy [54723efd278180f1068f7ebf26c7f1aefaa082144fee4af513ccfe973ac23757] <==
	 >
	E1209 11:38:31.507167       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 11:38:41.652103       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-529265\": dial tcp 192.168.39.137:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.137:59852->192.168.39.137:8443: read: connection reset by peer"
	E1209 11:38:42.797538       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-529265\": dial tcp 192.168.39.137:8443: connect: connection refused"
	I1209 11:38:46.323217       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.137"]
	E1209 11:38:46.323356       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 11:38:46.399370       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 11:38:46.399422       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 11:38:46.399454       1 server_linux.go:169] "Using iptables Proxier"
	I1209 11:38:46.402287       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 11:38:46.402629       1 server.go:483] "Version info" version="v1.31.2"
	I1209 11:38:46.402654       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 11:38:46.406242       1 config.go:199] "Starting service config controller"
	I1209 11:38:46.408194       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 11:38:46.408335       1 config.go:105] "Starting endpoint slice config controller"
	I1209 11:38:46.408359       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 11:38:46.409035       1 config.go:328] "Starting node config controller"
	I1209 11:38:46.410714       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 11:38:46.508881       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 11:38:46.509003       1 shared_informer.go:320] Caches are synced for service config
	I1209 11:38:46.511066       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7b56cd8c9787715b1e91b413565fe2f41a70c9f63a067ff3d5e33d92f6e77bb1] <==
	
	
	==> kube-scheduler [2da0baee10803c8d66c70488dc28bbcb5168ac188eedf55f3bdcf8ca35ec0b6e] <==
	I1209 11:38:21.028248       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [fb3d343aa45baf1f29eee5aa63bb0b681ca1a3f371c7df0b297730eed39389d5] <==
	I1209 11:38:43.864967       1 serving.go:386] Generated self-signed cert in-memory
	W1209 11:38:46.289059       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 11:38:46.289094       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 11:38:46.289104       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 11:38:46.289110       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 11:38:46.338589       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1209 11:38:46.338707       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 11:38:46.344095       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 11:38:46.344202       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 11:38:46.344942       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1209 11:38:46.345140       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 11:38:46.445405       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.698095    3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4395cca1841dbdb67cff36eec0a10ff6-ca-certs\") pod \"kube-apiserver-pause-529265\" (UID: \"4395cca1841dbdb67cff36eec0a10ff6\") " pod="kube-system/kube-apiserver-pause-529265"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.698115    3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4395cca1841dbdb67cff36eec0a10ff6-usr-share-ca-certificates\") pod \"kube-apiserver-pause-529265\" (UID: \"4395cca1841dbdb67cff36eec0a10ff6\") " pod="kube-system/kube-apiserver-pause-529265"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.698131    3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdee20116799e47b2c6d59a844ac20d4-k8s-certs\") pod \"kube-controller-manager-pause-529265\" (UID: \"cdee20116799e47b2c6d59a844ac20d4\") " pod="kube-system/kube-controller-manager-pause-529265"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.698146    3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdee20116799e47b2c6d59a844ac20d4-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-529265\" (UID: \"cdee20116799e47b2c6d59a844ac20d4\") " pod="kube-system/kube-controller-manager-pause-529265"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.703840    3363 kubelet_node_status.go:72] "Attempting to register node" node="pause-529265"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: E1209 11:38:42.704807    3363 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.137:8443: connect: connection refused" node="pause-529265"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.906707    3363 kubelet_node_status.go:72] "Attempting to register node" node="pause-529265"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.907706    3363 scope.go:117] "RemoveContainer" containerID="2da0baee10803c8d66c70488dc28bbcb5168ac188eedf55f3bdcf8ca35ec0b6e"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: E1209 11:38:42.908094    3363 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.137:8443: connect: connection refused" node="pause-529265"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.909010    3363 scope.go:117] "RemoveContainer" containerID="1f8b2a354553d276bad1bd0ba810a5aadcc95aafb81669149c05c48daa23cc85"
	Dec 09 11:38:42 pause-529265 kubelet[3363]: I1209 11:38:42.910471    3363 scope.go:117] "RemoveContainer" containerID="87d9b7e067ec790850a7d5224c0afcb57c57d2b835e00985bb994729d7d45cf9"
	Dec 09 11:38:43 pause-529265 kubelet[3363]: E1209 11:38:43.094749    3363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-529265?timeout=10s\": dial tcp 192.168.39.137:8443: connect: connection refused" interval="800ms"
	Dec 09 11:38:43 pause-529265 kubelet[3363]: I1209 11:38:43.310144    3363 kubelet_node_status.go:72] "Attempting to register node" node="pause-529265"
	Dec 09 11:38:46 pause-529265 kubelet[3363]: I1209 11:38:46.417661    3363 kubelet_node_status.go:111] "Node was previously registered" node="pause-529265"
	Dec 09 11:38:46 pause-529265 kubelet[3363]: I1209 11:38:46.417847    3363 kubelet_node_status.go:75] "Successfully registered node" node="pause-529265"
	Dec 09 11:38:46 pause-529265 kubelet[3363]: I1209 11:38:46.417873    3363 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 09 11:38:46 pause-529265 kubelet[3363]: I1209 11:38:46.419289    3363 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 09 11:38:46 pause-529265 kubelet[3363]: I1209 11:38:46.474507    3363 apiserver.go:52] "Watching apiserver"
	Dec 09 11:38:46 pause-529265 kubelet[3363]: I1209 11:38:46.497121    3363 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 09 11:38:46 pause-529265 kubelet[3363]: I1209 11:38:46.598081    3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4724f3fd-b481-4a95-b628-3bdaee03df58-xtables-lock\") pod \"kube-proxy-96c5d\" (UID: \"4724f3fd-b481-4a95-b628-3bdaee03df58\") " pod="kube-system/kube-proxy-96c5d"
	Dec 09 11:38:46 pause-529265 kubelet[3363]: I1209 11:38:46.598393    3363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4724f3fd-b481-4a95-b628-3bdaee03df58-lib-modules\") pod \"kube-proxy-96c5d\" (UID: \"4724f3fd-b481-4a95-b628-3bdaee03df58\") " pod="kube-system/kube-proxy-96c5d"
	Dec 09 11:38:52 pause-529265 kubelet[3363]: E1209 11:38:52.607765    3363 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744332607510844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 11:38:52 pause-529265 kubelet[3363]: E1209 11:38:52.608183    3363 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744332607510844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 11:39:02 pause-529265 kubelet[3363]: E1209 11:39:02.611677    3363 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744342610608577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 11:39:02 pause-529265 kubelet[3363]: E1209 11:39:02.611718    3363 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733744342610608577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-529265 -n pause-529265
helpers_test.go:261: (dbg) Run:  kubectl --context pause-529265 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (94.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (269.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-014592 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-014592 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m28.852633317s)

                                                
                                                
-- stdout --
	* [old-k8s-version-014592] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20068
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-014592" primary control-plane node in "old-k8s-version-014592" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 11:42:14.694953  658729 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:42:14.695252  658729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:42:14.695263  658729 out.go:358] Setting ErrFile to fd 2...
	I1209 11:42:14.695266  658729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:42:14.695446  658729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:42:14.696060  658729 out.go:352] Setting JSON to false
	I1209 11:42:14.697070  658729 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":15879,"bootTime":1733728656,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 11:42:14.697186  658729 start.go:139] virtualization: kvm guest
	I1209 11:42:14.699561  658729 out.go:177] * [old-k8s-version-014592] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 11:42:14.700850  658729 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:42:14.700854  658729 notify.go:220] Checking for updates...
	I1209 11:42:14.703711  658729 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:42:14.705132  658729 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:42:14.706232  658729 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:42:14.707351  658729 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 11:42:14.708529  658729 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:42:14.710493  658729 config.go:182] Loaded profile config "cert-expiration-752166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:42:14.710652  658729 config.go:182] Loaded profile config "kubernetes-upgrade-835095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 11:42:14.710812  658729 config.go:182] Loaded profile config "running-upgrade-119214": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1209 11:42:14.710953  658729 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:42:14.752424  658729 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 11:42:14.753611  658729 start.go:297] selected driver: kvm2
	I1209 11:42:14.753635  658729 start.go:901] validating driver "kvm2" against <nil>
	I1209 11:42:14.753652  658729 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:42:14.754856  658729 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:42:14.754981  658729 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 11:42:14.772685  658729 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 11:42:14.773025  658729 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 11:42:14.774053  658729 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:42:14.774120  658729 cni.go:84] Creating CNI manager for ""
	I1209 11:42:14.774201  658729 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:42:14.774218  658729 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 11:42:14.774330  658729 start.go:340] cluster config:
	{Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:42:14.775212  658729 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:42:14.776891  658729 out.go:177] * Starting "old-k8s-version-014592" primary control-plane node in "old-k8s-version-014592" cluster
	I1209 11:42:14.778043  658729 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 11:42:14.778078  658729 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1209 11:42:14.778093  658729 cache.go:56] Caching tarball of preloaded images
	I1209 11:42:14.778161  658729 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 11:42:14.778211  658729 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1209 11:42:14.778343  658729 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/config.json ...
	I1209 11:42:14.778371  658729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/config.json: {Name:mk13ef9171f0753a158431baa8b1986193172d99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:42:14.778538  658729 start.go:360] acquireMachinesLock for old-k8s-version-014592: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:42:14.778575  658729 start.go:364] duration metric: took 19.322µs to acquireMachinesLock for "old-k8s-version-014592"
	I1209 11:42:14.778595  658729 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:42:14.778683  658729 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 11:42:14.780199  658729 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 11:42:14.780350  658729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:42:14.780383  658729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:42:14.797079  658729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I1209 11:42:14.797571  658729 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:42:14.798252  658729 main.go:141] libmachine: Using API Version  1
	I1209 11:42:14.798279  658729 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:42:14.798685  658729 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:42:14.798898  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:42:14.799061  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:42:14.799235  658729 start.go:159] libmachine.API.Create for "old-k8s-version-014592" (driver="kvm2")
	I1209 11:42:14.799286  658729 client.go:168] LocalClient.Create starting
	I1209 11:42:14.799323  658729 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 11:42:14.799362  658729 main.go:141] libmachine: Decoding PEM data...
	I1209 11:42:14.799382  658729 main.go:141] libmachine: Parsing certificate...
	I1209 11:42:14.799450  658729 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 11:42:14.799480  658729 main.go:141] libmachine: Decoding PEM data...
	I1209 11:42:14.799499  658729 main.go:141] libmachine: Parsing certificate...
	I1209 11:42:14.799526  658729 main.go:141] libmachine: Running pre-create checks...
	I1209 11:42:14.799540  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .PreCreateCheck
	I1209 11:42:14.799973  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetConfigRaw
	I1209 11:42:14.800352  658729 main.go:141] libmachine: Creating machine...
	I1209 11:42:14.800368  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .Create
	I1209 11:42:14.800504  658729 main.go:141] libmachine: (old-k8s-version-014592) Creating KVM machine...
	I1209 11:42:14.801794  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found existing default KVM network
	I1209 11:42:14.803639  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:14.803453  658752 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:04:ad:0f} reservation:<nil>}
	I1209 11:42:14.804918  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:14.804822  658752 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b2:28:08} reservation:<nil>}
	I1209 11:42:14.806328  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:14.806225  658752 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002ad610}
	I1209 11:42:14.806377  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | created network xml: 
	I1209 11:42:14.806393  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | <network>
	I1209 11:42:14.806405  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG |   <name>mk-old-k8s-version-014592</name>
	I1209 11:42:14.806419  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG |   <dns enable='no'/>
	I1209 11:42:14.806428  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG |   
	I1209 11:42:14.806438  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1209 11:42:14.806449  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG |     <dhcp>
	I1209 11:42:14.806461  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1209 11:42:14.806474  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG |     </dhcp>
	I1209 11:42:14.806486  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG |   </ip>
	I1209 11:42:14.806496  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG |   
	I1209 11:42:14.806502  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | </network>
	I1209 11:42:14.806511  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | 
	I1209 11:42:14.811791  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | trying to create private KVM network mk-old-k8s-version-014592 192.168.61.0/24...
	I1209 11:42:14.900032  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | private KVM network mk-old-k8s-version-014592 192.168.61.0/24 created
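
The network definition the driver just created is a small isolated libvirt network with its own DHCP range and DNS disabled. A minimal stand-alone sketch of defining such a network follows, assuming the virsh CLI is available; the network name, subnet, and file handling are illustrative and not taken from the minikube driver itself.

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    // Minimal libvirt network definition mirroring the XML logged above
    // (illustrative name and subnet, not minikube's).
    const networkXML = `<network>
      <name>mk-example</name>
      <dns enable='no'/>
      <ip address='192.168.61.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.61.2' end='192.168.61.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        // Write the definition to a temp file so virsh can read it.
        f, err := os.CreateTemp("", "mk-net-*.xml")
        if err != nil {
            log.Fatal(err)
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(networkXML); err != nil {
            log.Fatal(err)
        }
        f.Close()

        // Register the network and bring it up, much as the driver does internally.
        for _, args := range [][]string{{"net-define", f.Name()}, {"net-start", "mk-example"}} {
            if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
                log.Fatalf("virsh %v failed: %v\n%s", args, err, out)
            }
        }
    }
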
	I1209 11:42:14.900112  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:14.899844  658752 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:42:14.900141  658729 main.go:141] libmachine: (old-k8s-version-014592) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592 ...
	I1209 11:42:14.900165  658729 main.go:141] libmachine: (old-k8s-version-014592) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 11:42:14.900182  658729 main.go:141] libmachine: (old-k8s-version-014592) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 11:42:15.203650  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:15.203483  658752 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa...
	I1209 11:42:15.310612  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:15.310418  658752 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/old-k8s-version-014592.rawdisk...
	I1209 11:42:15.310666  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | Writing magic tar header
	I1209 11:42:15.310728  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | Writing SSH key tar header
	I1209 11:42:15.310765  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:15.310550  658752 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592 ...
	I1209 11:42:15.310784  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592
	I1209 11:42:15.310802  658729 main.go:141] libmachine: (old-k8s-version-014592) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592 (perms=drwx------)
	I1209 11:42:15.310821  658729 main.go:141] libmachine: (old-k8s-version-014592) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 11:42:15.310836  658729 main.go:141] libmachine: (old-k8s-version-014592) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 11:42:15.310867  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 11:42:15.310936  658729 main.go:141] libmachine: (old-k8s-version-014592) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 11:42:15.310949  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:42:15.310966  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 11:42:15.310975  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 11:42:15.310986  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | Checking permissions on dir: /home/jenkins
	I1209 11:42:15.310998  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | Checking permissions on dir: /home
	I1209 11:42:15.311014  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | Skipping /home - not owner
	I1209 11:42:15.311027  658729 main.go:141] libmachine: (old-k8s-version-014592) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 11:42:15.311043  658729 main.go:141] libmachine: (old-k8s-version-014592) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 11:42:15.311058  658729 main.go:141] libmachine: (old-k8s-version-014592) Creating domain...
	I1209 11:42:15.312011  658729 main.go:141] libmachine: (old-k8s-version-014592) define libvirt domain using xml: 
	I1209 11:42:15.312027  658729 main.go:141] libmachine: (old-k8s-version-014592) <domain type='kvm'>
	I1209 11:42:15.312036  658729 main.go:141] libmachine: (old-k8s-version-014592)   <name>old-k8s-version-014592</name>
	I1209 11:42:15.312044  658729 main.go:141] libmachine: (old-k8s-version-014592)   <memory unit='MiB'>2200</memory>
	I1209 11:42:15.312056  658729 main.go:141] libmachine: (old-k8s-version-014592)   <vcpu>2</vcpu>
	I1209 11:42:15.312067  658729 main.go:141] libmachine: (old-k8s-version-014592)   <features>
	I1209 11:42:15.312075  658729 main.go:141] libmachine: (old-k8s-version-014592)     <acpi/>
	I1209 11:42:15.312082  658729 main.go:141] libmachine: (old-k8s-version-014592)     <apic/>
	I1209 11:42:15.312087  658729 main.go:141] libmachine: (old-k8s-version-014592)     <pae/>
	I1209 11:42:15.312091  658729 main.go:141] libmachine: (old-k8s-version-014592)     
	I1209 11:42:15.312099  658729 main.go:141] libmachine: (old-k8s-version-014592)   </features>
	I1209 11:42:15.312104  658729 main.go:141] libmachine: (old-k8s-version-014592)   <cpu mode='host-passthrough'>
	I1209 11:42:15.312111  658729 main.go:141] libmachine: (old-k8s-version-014592)   
	I1209 11:42:15.312115  658729 main.go:141] libmachine: (old-k8s-version-014592)   </cpu>
	I1209 11:42:15.312124  658729 main.go:141] libmachine: (old-k8s-version-014592)   <os>
	I1209 11:42:15.312132  658729 main.go:141] libmachine: (old-k8s-version-014592)     <type>hvm</type>
	I1209 11:42:15.312163  658729 main.go:141] libmachine: (old-k8s-version-014592)     <boot dev='cdrom'/>
	I1209 11:42:15.312186  658729 main.go:141] libmachine: (old-k8s-version-014592)     <boot dev='hd'/>
	I1209 11:42:15.312199  658729 main.go:141] libmachine: (old-k8s-version-014592)     <bootmenu enable='no'/>
	I1209 11:42:15.312212  658729 main.go:141] libmachine: (old-k8s-version-014592)   </os>
	I1209 11:42:15.312224  658729 main.go:141] libmachine: (old-k8s-version-014592)   <devices>
	I1209 11:42:15.312233  658729 main.go:141] libmachine: (old-k8s-version-014592)     <disk type='file' device='cdrom'>
	I1209 11:42:15.312244  658729 main.go:141] libmachine: (old-k8s-version-014592)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/boot2docker.iso'/>
	I1209 11:42:15.312251  658729 main.go:141] libmachine: (old-k8s-version-014592)       <target dev='hdc' bus='scsi'/>
	I1209 11:42:15.312257  658729 main.go:141] libmachine: (old-k8s-version-014592)       <readonly/>
	I1209 11:42:15.312261  658729 main.go:141] libmachine: (old-k8s-version-014592)     </disk>
	I1209 11:42:15.312288  658729 main.go:141] libmachine: (old-k8s-version-014592)     <disk type='file' device='disk'>
	I1209 11:42:15.312313  658729 main.go:141] libmachine: (old-k8s-version-014592)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 11:42:15.312334  658729 main.go:141] libmachine: (old-k8s-version-014592)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/old-k8s-version-014592.rawdisk'/>
	I1209 11:42:15.312347  658729 main.go:141] libmachine: (old-k8s-version-014592)       <target dev='hda' bus='virtio'/>
	I1209 11:42:15.312361  658729 main.go:141] libmachine: (old-k8s-version-014592)     </disk>
	I1209 11:42:15.312373  658729 main.go:141] libmachine: (old-k8s-version-014592)     <interface type='network'>
	I1209 11:42:15.312392  658729 main.go:141] libmachine: (old-k8s-version-014592)       <source network='mk-old-k8s-version-014592'/>
	I1209 11:42:15.312408  658729 main.go:141] libmachine: (old-k8s-version-014592)       <model type='virtio'/>
	I1209 11:42:15.312432  658729 main.go:141] libmachine: (old-k8s-version-014592)     </interface>
	I1209 11:42:15.312457  658729 main.go:141] libmachine: (old-k8s-version-014592)     <interface type='network'>
	I1209 11:42:15.312470  658729 main.go:141] libmachine: (old-k8s-version-014592)       <source network='default'/>
	I1209 11:42:15.312485  658729 main.go:141] libmachine: (old-k8s-version-014592)       <model type='virtio'/>
	I1209 11:42:15.312496  658729 main.go:141] libmachine: (old-k8s-version-014592)     </interface>
	I1209 11:42:15.312503  658729 main.go:141] libmachine: (old-k8s-version-014592)     <serial type='pty'>
	I1209 11:42:15.312510  658729 main.go:141] libmachine: (old-k8s-version-014592)       <target port='0'/>
	I1209 11:42:15.312517  658729 main.go:141] libmachine: (old-k8s-version-014592)     </serial>
	I1209 11:42:15.312525  658729 main.go:141] libmachine: (old-k8s-version-014592)     <console type='pty'>
	I1209 11:42:15.312532  658729 main.go:141] libmachine: (old-k8s-version-014592)       <target type='serial' port='0'/>
	I1209 11:42:15.312540  658729 main.go:141] libmachine: (old-k8s-version-014592)     </console>
	I1209 11:42:15.312547  658729 main.go:141] libmachine: (old-k8s-version-014592)     <rng model='virtio'>
	I1209 11:42:15.312562  658729 main.go:141] libmachine: (old-k8s-version-014592)       <backend model='random'>/dev/random</backend>
	I1209 11:42:15.312577  658729 main.go:141] libmachine: (old-k8s-version-014592)     </rng>
	I1209 11:42:15.312588  658729 main.go:141] libmachine: (old-k8s-version-014592)     
	I1209 11:42:15.312595  658729 main.go:141] libmachine: (old-k8s-version-014592)     
	I1209 11:42:15.312606  658729 main.go:141] libmachine: (old-k8s-version-014592)   </devices>
	I1209 11:42:15.312612  658729 main.go:141] libmachine: (old-k8s-version-014592) </domain>
	I1209 11:42:15.312659  658729 main.go:141] libmachine: (old-k8s-version-014592) 
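
Once a domain XML like the one above is assembled, it still has to be registered with libvirt and booted. A rough equivalent using virsh is sketched below; the XML path and domain name are placeholders, not minikube's actual values.

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Register the domain from its XML, boot it, and confirm its state.
        for _, args := range [][]string{
            {"define", "/tmp/old-k8s-version-example.xml"}, // persist the domain definition
            {"start", "old-k8s-version-example"},           // boot the VM
            {"domstate", "old-k8s-version-example"},        // should report "running"
        } {
            out, err := exec.Command("virsh", args...).CombinedOutput()
            if err != nil {
                log.Fatalf("virsh %v: %v\n%s", args, err, out)
            }
            log.Printf("virsh %v: %s", args, out)
        }
    }
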
	I1209 11:42:15.316652  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:b5:50:b3 in network default
	I1209 11:42:15.317314  658729 main.go:141] libmachine: (old-k8s-version-014592) Ensuring networks are active...
	I1209 11:42:15.317351  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:15.317923  658729 main.go:141] libmachine: (old-k8s-version-014592) Ensuring network default is active
	I1209 11:42:15.318360  658729 main.go:141] libmachine: (old-k8s-version-014592) Ensuring network mk-old-k8s-version-014592 is active
	I1209 11:42:15.318942  658729 main.go:141] libmachine: (old-k8s-version-014592) Getting domain xml...
	I1209 11:42:15.319791  658729 main.go:141] libmachine: (old-k8s-version-014592) Creating domain...
	I1209 11:42:16.560532  658729 main.go:141] libmachine: (old-k8s-version-014592) Waiting to get IP...
	I1209 11:42:16.561304  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:16.561709  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:42:16.561740  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:16.561676  658752 retry.go:31] will retry after 270.379955ms: waiting for machine to come up
	I1209 11:42:16.834244  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:16.834912  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:42:16.834937  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:16.834866  658752 retry.go:31] will retry after 304.977584ms: waiting for machine to come up
	I1209 11:42:17.141302  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:17.141859  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:42:17.141882  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:17.141819  658752 retry.go:31] will retry after 352.857098ms: waiting for machine to come up
	I1209 11:42:17.496497  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:17.496988  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:42:17.497017  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:17.496930  658752 retry.go:31] will retry after 458.507241ms: waiting for machine to come up
	I1209 11:42:17.956539  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:17.957026  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:42:17.957053  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:17.956969  658752 retry.go:31] will retry after 504.291245ms: waiting for machine to come up
	I1209 11:42:18.463399  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:18.463912  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:42:18.463940  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:18.463849  658752 retry.go:31] will retry after 574.551001ms: waiting for machine to come up
	I1209 11:42:19.040342  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:19.040800  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:42:19.040852  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:19.040787  658752 retry.go:31] will retry after 891.332076ms: waiting for machine to come up
	I1209 11:42:19.934092  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:19.934641  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:42:19.934668  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:19.934569  658752 retry.go:31] will retry after 1.208105244s: waiting for machine to come up
	I1209 11:42:21.144492  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:21.144979  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:42:21.145022  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:21.144914  658752 retry.go:31] will retry after 1.857601269s: waiting for machine to come up
	I1209 11:42:23.004776  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:23.005394  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:42:23.005428  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:23.005331  658752 retry.go:31] will retry after 1.454921779s: waiting for machine to come up
	I1209 11:42:24.461695  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:24.462338  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:42:24.462375  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:24.462269  658752 retry.go:31] will retry after 1.820947714s: waiting for machine to come up
	I1209 11:42:26.284847  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:26.285399  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:42:26.285430  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:26.285344  658752 retry.go:31] will retry after 2.248896236s: waiting for machine to come up
	I1209 11:42:28.535518  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:28.536062  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:42:28.536086  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:28.536035  658752 retry.go:31] will retry after 4.135793207s: waiting for machine to come up
	I1209 11:42:32.675656  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:32.676216  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:42:32.676249  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:42:32.676169  658752 retry.go:31] will retry after 5.353675444s: waiting for machine to come up
	I1209 11:42:38.031999  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:38.032604  658729 main.go:141] libmachine: (old-k8s-version-014592) Found IP for machine: 192.168.61.132
	I1209 11:42:38.032644  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has current primary IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
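
The repeated "will retry after …" lines are the driver polling for the guest's DHCP lease with a growing pause between attempts. A minimal sketch of that pattern is shown below; lookupLeaseIP is a hypothetical callback (it could, for example, parse virsh net-dhcp-leases output), and the initial delay, growth factor, and deadline are illustrative.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForIP polls lookupLeaseIP until it reports an address or the deadline passes,
    // lengthening the pause between attempts roughly like the retries logged above.
    func waitForIP(lookupLeaseIP func() (string, bool), deadline time.Duration) (string, error) {
        delay := 250 * time.Millisecond
        stop := time.After(deadline)
        for {
            if ip, ok := lookupLeaseIP(); ok {
                return ip, nil
            }
            select {
            case <-stop:
                return "", errors.New("machine did not obtain an IP before the deadline")
            case <-time.After(delay):
                delay += delay / 2 // back off a little more each round
            }
        }
    }

    func main() {
        // Fake lookup that "finds" an address on the third attempt, for demonstration.
        attempts := 0
        ip, err := waitForIP(func() (string, bool) {
            attempts++
            return "192.168.61.132", attempts >= 3
        }, 5*time.Second)
        fmt.Println(ip, err)
    }
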
	I1209 11:42:38.032652  658729 main.go:141] libmachine: (old-k8s-version-014592) Reserving static IP address...
	I1209 11:42:38.033148  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-014592", mac: "52:54:00:54:72:3e", ip: "192.168.61.132"} in network mk-old-k8s-version-014592
	I1209 11:42:38.109532  658729 main.go:141] libmachine: (old-k8s-version-014592) Reserved static IP address: 192.168.61.132
	I1209 11:42:38.109565  658729 main.go:141] libmachine: (old-k8s-version-014592) Waiting for SSH to be available...
	I1209 11:42:38.109582  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | Getting to WaitForSSH function...
	I1209 11:42:38.112143  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:38.112505  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:42:29 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:minikube Clientid:01:52:54:00:54:72:3e}
	I1209 11:42:38.112533  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:38.112718  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | Using SSH client type: external
	I1209 11:42:38.112747  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa (-rw-------)
	I1209 11:42:38.112782  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:42:38.112800  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | About to run SSH command:
	I1209 11:42:38.112817  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | exit 0
	I1209 11:42:38.238297  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | SSH cmd err, output: <nil>: 
	I1209 11:42:38.238623  658729 main.go:141] libmachine: (old-k8s-version-014592) KVM machine creation complete!
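
Creation is only reported complete once a trivial command (exit 0) succeeds over SSH with the generated key. A comparable stand-alone probe, sketched with golang.org/x/crypto/ssh, appears below; the address, user, and key path are placeholders, and host-key checking is disabled to mirror the StrictHostKeyChecking=no option in the logged ssh invocation.

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // probeSSH runs `exit 0` on the target and returns nil once the host is reachable.
    func probeSSH(addr, user, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        return session.Run("exit 0")
    }

    func main() {
        // Placeholder address and key path; substitute the machine's real values.
        if err := probeSSH("192.168.61.132:22", "docker", "/path/to/id_rsa"); err != nil {
            log.Fatal(err)
        }
        log.Println("SSH is available")
    }
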
	I1209 11:42:38.238975  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetConfigRaw
	I1209 11:42:38.239631  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:42:38.239837  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:42:38.240009  658729 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 11:42:38.240025  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetState
	I1209 11:42:38.241540  658729 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 11:42:38.241557  658729 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 11:42:38.241563  658729 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 11:42:38.241568  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:42:38.244386  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:38.244787  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:42:29 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:42:38.244816  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:38.244943  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:42:38.245138  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:42:38.245309  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:42:38.245495  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:42:38.245708  658729 main.go:141] libmachine: Using SSH client type: native
	I1209 11:42:38.245980  658729 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:42:38.245994  658729 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 11:42:38.349235  658729 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:42:38.349264  658729 main.go:141] libmachine: Detecting the provisioner...
	I1209 11:42:38.349276  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:42:38.352431  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:38.352809  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:42:29 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:42:38.352841  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:38.352968  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:42:38.353162  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:42:38.353327  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:42:38.353466  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:42:38.353633  658729 main.go:141] libmachine: Using SSH client type: native
	I1209 11:42:38.353809  658729 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:42:38.353820  658729 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 11:42:38.458703  658729 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 11:42:38.458754  658729 main.go:141] libmachine: found compatible host: buildroot
	I1209 11:42:38.458761  658729 main.go:141] libmachine: Provisioning with buildroot...
	I1209 11:42:38.458769  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:42:38.459034  658729 buildroot.go:166] provisioning hostname "old-k8s-version-014592"
	I1209 11:42:38.459062  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:42:38.459248  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:42:38.461833  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:38.462135  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:42:29 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:42:38.462189  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:38.462323  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:42:38.462525  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:42:38.462715  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:42:38.462843  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:42:38.463019  658729 main.go:141] libmachine: Using SSH client type: native
	I1209 11:42:38.463250  658729 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:42:38.463264  658729 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-014592 && echo "old-k8s-version-014592" | sudo tee /etc/hostname
	I1209 11:42:38.587356  658729 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-014592
	
	I1209 11:42:38.587396  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:42:38.590321  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:38.590681  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:42:29 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:42:38.590711  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:38.590902  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:42:38.591109  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:42:38.591297  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:42:38.591427  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:42:38.591578  658729 main.go:141] libmachine: Using SSH client type: native
	I1209 11:42:38.591788  658729 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:42:38.591815  658729 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-014592' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-014592/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-014592' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:42:38.702713  658729 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:42:38.702761  658729 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:42:38.702791  658729 buildroot.go:174] setting up certificates
	I1209 11:42:38.702801  658729 provision.go:84] configureAuth start
	I1209 11:42:38.702812  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:42:38.703135  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:42:38.705958  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:38.706313  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:42:29 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:42:38.706341  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:38.706504  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:42:38.708649  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:38.708985  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:42:29 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:42:38.709018  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:38.709156  658729 provision.go:143] copyHostCerts
	I1209 11:42:38.709222  658729 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:42:38.709247  658729 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:42:38.709335  658729 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:42:38.709509  658729 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:42:38.709520  658729 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:42:38.709559  658729 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:42:38.709638  658729 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:42:38.709648  658729 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:42:38.709684  658729 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:42:38.709751  658729 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-014592 san=[127.0.0.1 192.168.61.132 localhost minikube old-k8s-version-014592]
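
The server certificate above is signed by the local minikube CA with the machine's hostname and IP addresses listed as SANs. The self-contained sketch below produces a server certificate in the same spirit using a throwaway CA; the names, addresses, key sizes, and lifetimes are illustrative and the real provisioning code differs.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA; in practice the existing ca.pem/ca-key.pem would be loaded instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "exampleCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs the log lists for this machine.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "old-k8s-version-014592"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-014592"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.132")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
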
	I1209 11:42:38.992130  658729 provision.go:177] copyRemoteCerts
	I1209 11:42:38.992196  658729 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:42:38.992224  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:42:38.994844  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:38.995194  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:42:29 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:42:38.995220  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:38.995385  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:42:38.995630  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:42:38.995847  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:42:38.996020  658729 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:42:39.076559  658729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:42:39.101714  658729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 11:42:39.126853  658729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:42:39.151882  658729 provision.go:87] duration metric: took 449.066386ms to configureAuth
	I1209 11:42:39.151917  658729 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:42:39.152087  658729 config.go:182] Loaded profile config "old-k8s-version-014592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 11:42:39.152168  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:42:39.154637  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:39.154976  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:42:29 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:42:39.155005  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:39.155226  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:42:39.155475  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:42:39.155688  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:42:39.155862  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:42:39.156018  658729 main.go:141] libmachine: Using SSH client type: native
	I1209 11:42:39.156221  658729 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:42:39.156242  658729 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:42:39.371640  658729 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:42:39.371689  658729 main.go:141] libmachine: Checking connection to Docker...
	I1209 11:42:39.371703  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetURL
	I1209 11:42:39.373056  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | Using libvirt version 6000000
	I1209 11:42:39.375593  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:39.375986  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:42:29 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:42:39.376016  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:39.376193  658729 main.go:141] libmachine: Docker is up and running!
	I1209 11:42:39.376216  658729 main.go:141] libmachine: Reticulating splines...
	I1209 11:42:39.376224  658729 client.go:171] duration metric: took 24.576926447s to LocalClient.Create
	I1209 11:42:39.376250  658729 start.go:167] duration metric: took 24.577012195s to libmachine.API.Create "old-k8s-version-014592"
	I1209 11:42:39.376262  658729 start.go:293] postStartSetup for "old-k8s-version-014592" (driver="kvm2")
	I1209 11:42:39.376271  658729 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:42:39.376291  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:42:39.376530  658729 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:42:39.376556  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:42:39.378471  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:39.378788  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:42:29 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:42:39.378834  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:39.378933  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:42:39.379137  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:42:39.379298  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:42:39.379426  658729 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:42:39.460747  658729 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:42:39.464812  658729 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:42:39.464846  658729 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:42:39.464924  658729 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:42:39.464996  658729 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:42:39.465097  658729 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:42:39.474526  658729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:42:39.497141  658729 start.go:296] duration metric: took 120.862768ms for postStartSetup
	I1209 11:42:39.497204  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetConfigRaw
	I1209 11:42:39.497891  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:42:39.501207  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:39.501619  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:42:29 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:42:39.501651  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:39.501915  658729 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/config.json ...
	I1209 11:42:39.502126  658729 start.go:128] duration metric: took 24.723429002s to createHost
	I1209 11:42:39.502156  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:42:39.504608  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:39.504927  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:42:29 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:42:39.504958  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:39.505087  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:42:39.505285  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:42:39.505467  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:42:39.505595  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:42:39.505787  658729 main.go:141] libmachine: Using SSH client type: native
	I1209 11:42:39.505945  658729 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:42:39.505955  658729 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:42:39.610968  658729 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733744559.588101100
	
	I1209 11:42:39.610995  658729 fix.go:216] guest clock: 1733744559.588101100
	I1209 11:42:39.611003  658729 fix.go:229] Guest: 2024-12-09 11:42:39.5881011 +0000 UTC Remote: 2024-12-09 11:42:39.502141486 +0000 UTC m=+24.851793515 (delta=85.959614ms)
	I1209 11:42:39.611025  658729 fix.go:200] guest clock delta is within tolerance: 85.959614ms
	I1209 11:42:39.611034  658729 start.go:83] releasing machines lock for "old-k8s-version-014592", held for 24.832448397s
	I1209 11:42:39.611061  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:42:39.611330  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:42:39.614384  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:39.614777  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:42:29 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:42:39.614815  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:39.614976  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:42:39.615536  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:42:39.615757  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:42:39.615850  658729 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:42:39.615911  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:42:39.615996  658729 ssh_runner.go:195] Run: cat /version.json
	I1209 11:42:39.616026  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:42:39.618914  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:39.619205  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:39.619241  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:42:29 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:42:39.619268  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:39.619386  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:42:39.619593  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:42:39.619639  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:42:29 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:42:39.619667  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:39.619747  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:42:39.619826  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:42:39.619897  658729 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:42:39.620326  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:42:39.622357  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:42:39.622562  658729 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:42:39.732693  658729 ssh_runner.go:195] Run: systemctl --version
	I1209 11:42:39.739190  658729 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:42:39.900076  658729 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:42:39.906007  658729 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:42:39.906092  658729 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:42:39.923433  658729 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:42:39.923464  658729 start.go:495] detecting cgroup driver to use...
	I1209 11:42:39.923547  658729 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:42:39.941220  658729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:42:39.955017  658729 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:42:39.955077  658729 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:42:39.968709  658729 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:42:39.984894  658729 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:42:40.098636  658729 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:42:40.233508  658729 docker.go:233] disabling docker service ...
	I1209 11:42:40.233583  658729 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:42:40.249351  658729 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:42:40.263509  658729 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:42:40.433416  658729 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:42:40.593201  658729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:42:40.608223  658729 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:42:40.627836  658729 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 11:42:40.627911  658729 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:42:40.637981  658729 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:42:40.638060  658729 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:42:40.648068  658729 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:42:40.657972  658729 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:42:40.667794  658729 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:42:40.677649  658729 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:42:40.687585  658729 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:42:40.687654  658729 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:42:40.698908  658729 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:42:40.707813  658729 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:42:40.821621  658729 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:42:40.914886  658729 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:42:40.914984  658729 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:42:40.919304  658729 start.go:563] Will wait 60s for crictl version
	I1209 11:42:40.919363  658729 ssh_runner.go:195] Run: which crictl
	I1209 11:42:40.922779  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:42:40.962388  658729 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:42:40.962498  658729 ssh_runner.go:195] Run: crio --version
	I1209 11:42:40.990289  658729 ssh_runner.go:195] Run: crio --version
	I1209 11:42:41.022232  658729 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1209 11:42:41.023392  658729 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:42:41.025988  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:41.026408  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:42:29 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:42:41.026438  658729 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:42:41.026675  658729 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 11:42:41.030686  658729 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:42:41.042980  658729 kubeadm.go:883] updating cluster {Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:42:41.043100  658729 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 11:42:41.043162  658729 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:42:41.078021  658729 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:42:41.078096  658729 ssh_runner.go:195] Run: which lz4
	I1209 11:42:41.081927  658729 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:42:41.086142  658729 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:42:41.086204  658729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1209 11:42:42.546597  658729 crio.go:462] duration metric: took 1.464694713s to copy over tarball
	I1209 11:42:42.546676  658729 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:42:45.033181  658729 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.486470557s)
	I1209 11:42:45.033216  658729 crio.go:469] duration metric: took 2.486585107s to extract the tarball
	I1209 11:42:45.033225  658729 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:42:45.075886  658729 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:42:45.119524  658729 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:42:45.119552  658729 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 11:42:45.119622  658729 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:42:45.119669  658729 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:42:45.119637  658729 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:42:45.119707  658729 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 11:42:45.119720  658729 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:42:45.119720  658729 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 11:42:45.119633  658729 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:42:45.119761  658729 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:42:45.121521  658729 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:42:45.121531  658729 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 11:42:45.121528  658729 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 11:42:45.121557  658729 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:42:45.121563  658729 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:42:45.121522  658729 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:42:45.121522  658729 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:42:45.121524  658729 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:42:45.317741  658729 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:42:45.339669  658729 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 11:42:45.351366  658729 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:42:45.363941  658729 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 11:42:45.363995  658729 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:42:45.364038  658729 ssh_runner.go:195] Run: which crictl
	I1209 11:42:45.373518  658729 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 11:42:45.377384  658729 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 11:42:45.378272  658729 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:42:45.379088  658729 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:42:45.406053  658729 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 11:42:45.406103  658729 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 11:42:45.406152  658729 ssh_runner.go:195] Run: which crictl
	I1209 11:42:45.426041  658729 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 11:42:45.426081  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:42:45.426098  658729 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:42:45.426145  658729 ssh_runner.go:195] Run: which crictl
	I1209 11:42:45.494925  658729 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 11:42:45.494981  658729 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:42:45.495036  658729 ssh_runner.go:195] Run: which crictl
	I1209 11:42:45.509186  658729 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 11:42:45.509226  658729 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 11:42:45.509245  658729 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 11:42:45.509265  658729 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:42:45.509283  658729 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 11:42:45.509312  658729 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:42:45.509316  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:42:45.509344  658729 ssh_runner.go:195] Run: which crictl
	I1209 11:42:45.509316  658729 ssh_runner.go:195] Run: which crictl
	I1209 11:42:45.509319  658729 ssh_runner.go:195] Run: which crictl
	I1209 11:42:45.528889  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:42:45.528952  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:42:45.528889  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:42:45.528955  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:42:45.606256  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:42:45.606256  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:42:45.606347  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:42:45.667018  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:42:45.667064  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:42:45.677405  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:42:45.677427  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:42:45.782308  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:42:45.787806  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:42:45.787857  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:42:45.818818  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:42:45.818855  658729 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 11:42:45.820554  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:42:45.820570  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:42:45.900243  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:42:45.900382  658729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:42:45.918256  658729 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 11:42:45.942570  658729 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 11:42:45.947822  658729 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 11:42:45.947859  658729 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 11:42:45.988543  658729 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 11:42:45.988675  658729 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 11:42:46.373556  658729 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:42:46.516598  658729 cache_images.go:92] duration metric: took 1.397027644s to LoadCachedImages
	W1209 11:42:46.516715  658729 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1209 11:42:46.516735  658729 kubeadm.go:934] updating node { 192.168.61.132 8443 v1.20.0 crio true true} ...
	I1209 11:42:46.516902  658729 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-014592 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:42:46.516994  658729 ssh_runner.go:195] Run: crio config
	I1209 11:42:46.573177  658729 cni.go:84] Creating CNI manager for ""
	I1209 11:42:46.573202  658729 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:42:46.573211  658729 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:42:46.573230  658729 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.132 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-014592 NodeName:old-k8s-version-014592 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 11:42:46.573357  658729 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-014592"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:42:46.573443  658729 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 11:42:46.583011  658729 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:42:46.583080  658729 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:42:46.595547  658729 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1209 11:42:46.615918  658729 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:42:46.636344  658729 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1209 11:42:46.656246  658729 ssh_runner.go:195] Run: grep 192.168.61.132	control-plane.minikube.internal$ /etc/hosts
	I1209 11:42:46.660265  658729 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:42:46.672630  658729 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:42:46.813726  658729 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:42:46.832126  658729 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592 for IP: 192.168.61.132
	I1209 11:42:46.832155  658729 certs.go:194] generating shared ca certs ...
	I1209 11:42:46.832179  658729 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:42:46.832372  658729 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:42:46.832429  658729 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:42:46.832443  658729 certs.go:256] generating profile certs ...
	I1209 11:42:46.832526  658729 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.key
	I1209 11:42:46.832543  658729 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt with IP's: []
	I1209 11:42:46.983641  658729 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt ...
	I1209 11:42:46.983675  658729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt: {Name:mke4bb61c889b751a8310da03650aca73e580919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:42:46.983858  658729 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.key ...
	I1209 11:42:46.983876  658729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.key: {Name:mka95b1a5c36d674baaf07ed20eb9ffbda90ce1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:42:46.984040  658729 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key.28078577
	I1209 11:42:46.984064  658729 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.crt.28078577 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.132]
	I1209 11:42:47.140934  658729 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.crt.28078577 ...
	I1209 11:42:47.140968  658729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.crt.28078577: {Name:mked6e0475041df09eec0c3dafc2787afbbb28cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:42:47.141175  658729 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key.28078577 ...
	I1209 11:42:47.141193  658729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key.28078577: {Name:mk5c3e4903ebea68f8e2eb2c013f9f5414a66c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:42:47.141306  658729 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.crt.28078577 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.crt
	I1209 11:42:47.141394  658729 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key.28078577 -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key
	I1209 11:42:47.141449  658729 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key
	I1209 11:42:47.141466  658729 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.crt with IP's: []
	I1209 11:42:47.290887  658729 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.crt ...
	I1209 11:42:47.290921  658729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.crt: {Name:mkc7bc80fc65d73013d4dd8a44387ed81b0f4e12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:42:47.291123  658729 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key ...
	I1209 11:42:47.291141  658729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key: {Name:mkaf682c2bb73b21fa0af0224ab9b456b51fca55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:42:47.291383  658729 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:42:47.291437  658729 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:42:47.291452  658729 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:42:47.291492  658729 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:42:47.291525  658729 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:42:47.291575  658729 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:42:47.291630  658729 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:42:47.292454  658729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:42:47.322296  658729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:42:47.347075  658729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:42:47.371429  658729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:42:47.395332  658729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 11:42:47.425390  658729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:42:47.455969  658729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:42:47.480941  658729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:42:47.509136  658729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:42:47.533466  658729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:42:47.560658  658729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:42:47.585245  658729 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:42:47.603032  658729 ssh_runner.go:195] Run: openssl version
	I1209 11:42:47.609017  658729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:42:47.620154  658729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:42:47.624595  658729 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:42:47.624669  658729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:42:47.630494  658729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:42:47.643626  658729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:42:47.654062  658729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:42:47.666028  658729 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:42:47.666114  658729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:42:47.673060  658729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:42:47.685697  658729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:42:47.702356  658729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:42:47.707343  658729 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:42:47.707423  658729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:42:47.723933  658729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:42:47.736948  658729 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:42:47.743592  658729 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 11:42:47.743677  658729 kubeadm.go:392] StartCluster: {Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:42:47.743785  658729 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:42:47.743856  658729 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:42:47.782528  658729 cri.go:89] found id: ""
	I1209 11:42:47.782644  658729 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:42:47.792400  658729 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:42:47.802389  658729 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:42:47.811829  658729 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:42:47.811853  658729 kubeadm.go:157] found existing configuration files:
	
	I1209 11:42:47.811897  658729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:42:47.821040  658729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:42:47.821121  658729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:42:47.830426  658729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:42:47.839068  658729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:42:47.839142  658729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:42:47.848541  658729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:42:47.857169  658729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:42:47.857242  658729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:42:47.866019  658729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:42:47.877490  658729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:42:47.877564  658729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:42:47.890701  658729 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:42:48.146235  658729 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:44:46.180080  658729 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 11:44:46.180213  658729 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 11:44:46.181625  658729 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:44:46.181696  658729 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:44:46.181793  658729 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:44:46.181870  658729 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:44:46.181965  658729 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:44:46.182045  658729 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:44:46.183513  658729 out.go:235]   - Generating certificates and keys ...
	I1209 11:44:46.183585  658729 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:44:46.183679  658729 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:44:46.183784  658729 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 11:44:46.183862  658729 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 11:44:46.183964  658729 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 11:44:46.184014  658729 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 11:44:46.184095  658729 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 11:44:46.184292  658729 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-014592] and IPs [192.168.61.132 127.0.0.1 ::1]
	I1209 11:44:46.184353  658729 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 11:44:46.184472  658729 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-014592] and IPs [192.168.61.132 127.0.0.1 ::1]
	I1209 11:44:46.184535  658729 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 11:44:46.184585  658729 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 11:44:46.184636  658729 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 11:44:46.184706  658729 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:44:46.184770  658729 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:44:46.184815  658729 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:44:46.184878  658729 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:44:46.184947  658729 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:44:46.185102  658729 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:44:46.185228  658729 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:44:46.185298  658729 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:44:46.185370  658729 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:44:46.186586  658729 out.go:235]   - Booting up control plane ...
	I1209 11:44:46.186677  658729 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:44:46.186743  658729 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:44:46.186806  658729 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:44:46.186873  658729 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:44:46.186991  658729 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:44:46.187033  658729 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:44:46.187091  658729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:44:46.187343  658729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:44:46.187434  658729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:44:46.187630  658729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:44:46.187693  658729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:44:46.187843  658729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:44:46.187909  658729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:44:46.188160  658729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:44:46.188275  658729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:44:46.188528  658729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:44:46.188543  658729 kubeadm.go:310] 
	I1209 11:44:46.188610  658729 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 11:44:46.188659  658729 kubeadm.go:310] 		timed out waiting for the condition
	I1209 11:44:46.188670  658729 kubeadm.go:310] 
	I1209 11:44:46.188725  658729 kubeadm.go:310] 	This error is likely caused by:
	I1209 11:44:46.188775  658729 kubeadm.go:310] 		- The kubelet is not running
	I1209 11:44:46.188907  658729 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 11:44:46.188916  658729 kubeadm.go:310] 
	I1209 11:44:46.189004  658729 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 11:44:46.189034  658729 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 11:44:46.189066  658729 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 11:44:46.189072  658729 kubeadm.go:310] 
	I1209 11:44:46.189186  658729 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 11:44:46.189279  658729 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 11:44:46.189291  658729 kubeadm.go:310] 
	I1209 11:44:46.189395  658729 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 11:44:46.189506  658729 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 11:44:46.189576  658729 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 11:44:46.189633  658729 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 11:44:46.189652  658729 kubeadm.go:310] 
	W1209 11:44:46.189763  658729 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-014592] and IPs [192.168.61.132 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-014592] and IPs [192.168.61.132 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-014592] and IPs [192.168.61.132 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-014592] and IPs [192.168.61.132 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1209 11:44:46.189806  658729 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:44:46.646262  658729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:44:46.660139  658729 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:44:46.669301  658729 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:44:46.669325  658729 kubeadm.go:157] found existing configuration files:
	
	I1209 11:44:46.669384  658729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:44:46.678292  658729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:44:46.678350  658729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:44:46.687318  658729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:44:46.695862  658729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:44:46.695924  658729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:44:46.704582  658729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:44:46.712979  658729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:44:46.713044  658729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:44:46.721802  658729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:44:46.730002  658729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:44:46.730063  658729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:44:46.738819  658729 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:44:46.937717  658729 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:46:42.882046  658729 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 11:46:42.882212  658729 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 11:46:42.883401  658729 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:46:42.883456  658729 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:46:42.883541  658729 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:46:42.883675  658729 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:46:42.883824  658729 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:46:42.883914  658729 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:46:42.885568  658729 out.go:235]   - Generating certificates and keys ...
	I1209 11:46:42.885640  658729 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:46:42.885696  658729 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:46:42.885786  658729 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:46:42.885861  658729 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:46:42.885921  658729 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:46:42.885970  658729 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:46:42.886023  658729 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:46:42.886095  658729 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:46:42.886216  658729 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:46:42.886332  658729 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:46:42.886419  658729 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:46:42.886517  658729 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:46:42.886597  658729 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:46:42.886667  658729 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:46:42.886748  658729 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:46:42.886850  658729 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:46:42.886994  658729 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:46:42.887115  658729 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:46:42.887160  658729 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:46:42.887219  658729 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:46:42.888935  658729 out.go:235]   - Booting up control plane ...
	I1209 11:46:42.889029  658729 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:46:42.889097  658729 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:46:42.889154  658729 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:46:42.889227  658729 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:46:42.889374  658729 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:46:42.889429  658729 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:46:42.889491  658729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:46:42.889639  658729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:46:42.889695  658729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:46:42.889839  658729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:46:42.889911  658729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:46:42.890183  658729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:46:42.890274  658729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:46:42.890442  658729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:46:42.890541  658729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:46:42.890742  658729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:46:42.890752  658729 kubeadm.go:310] 
	I1209 11:46:42.890805  658729 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 11:46:42.890859  658729 kubeadm.go:310] 		timed out waiting for the condition
	I1209 11:46:42.890869  658729 kubeadm.go:310] 
	I1209 11:46:42.890923  658729 kubeadm.go:310] 	This error is likely caused by:
	I1209 11:46:42.890971  658729 kubeadm.go:310] 		- The kubelet is not running
	I1209 11:46:42.891097  658729 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 11:46:42.891115  658729 kubeadm.go:310] 
	I1209 11:46:42.891250  658729 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 11:46:42.891283  658729 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 11:46:42.891321  658729 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 11:46:42.891334  658729 kubeadm.go:310] 
	I1209 11:46:42.891468  658729 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 11:46:42.891575  658729 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 11:46:42.891586  658729 kubeadm.go:310] 
	I1209 11:46:42.891725  658729 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 11:46:42.891834  658729 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 11:46:42.891931  658729 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 11:46:42.892028  658729 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 11:46:42.892083  658729 kubeadm.go:310] 
	I1209 11:46:42.892113  658729 kubeadm.go:394] duration metric: took 3m55.148449578s to StartCluster
	I1209 11:46:42.892165  658729 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:46:42.892226  658729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:46:42.953525  658729 cri.go:89] found id: ""
	I1209 11:46:42.953554  658729 logs.go:282] 0 containers: []
	W1209 11:46:42.953561  658729 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:46:42.953567  658729 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:46:42.953627  658729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:46:42.987078  658729 cri.go:89] found id: ""
	I1209 11:46:42.987116  658729 logs.go:282] 0 containers: []
	W1209 11:46:42.987125  658729 logs.go:284] No container was found matching "etcd"
	I1209 11:46:42.987130  658729 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:46:42.987184  658729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:46:43.020295  658729 cri.go:89] found id: ""
	I1209 11:46:43.020323  658729 logs.go:282] 0 containers: []
	W1209 11:46:43.020333  658729 logs.go:284] No container was found matching "coredns"
	I1209 11:46:43.020342  658729 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:46:43.020415  658729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:46:43.063932  658729 cri.go:89] found id: ""
	I1209 11:46:43.063971  658729 logs.go:282] 0 containers: []
	W1209 11:46:43.063981  658729 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:46:43.063988  658729 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:46:43.064065  658729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:46:43.098703  658729 cri.go:89] found id: ""
	I1209 11:46:43.098730  658729 logs.go:282] 0 containers: []
	W1209 11:46:43.098739  658729 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:46:43.098746  658729 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:46:43.098810  658729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:46:43.131784  658729 cri.go:89] found id: ""
	I1209 11:46:43.131826  658729 logs.go:282] 0 containers: []
	W1209 11:46:43.131840  658729 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:46:43.131859  658729 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:46:43.131938  658729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:46:43.167026  658729 cri.go:89] found id: ""
	I1209 11:46:43.167058  658729 logs.go:282] 0 containers: []
	W1209 11:46:43.167072  658729 logs.go:284] No container was found matching "kindnet"
	I1209 11:46:43.167086  658729 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:46:43.167103  658729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:46:43.272916  658729 logs.go:123] Gathering logs for container status ...
	I1209 11:46:43.272962  658729 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:46:43.311153  658729 logs.go:123] Gathering logs for kubelet ...
	I1209 11:46:43.311190  658729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:46:43.360802  658729 logs.go:123] Gathering logs for dmesg ...
	I1209 11:46:43.360845  658729 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:46:43.373562  658729 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:46:43.373589  658729 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:46:43.484970  658729 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1209 11:46:43.485011  658729 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1209 11:46:43.485053  658729 out.go:270] * 
	* 
	W1209 11:46:43.485141  658729 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 11:46:43.485192  658729 out.go:270] * 
	* 
	W1209 11:46:43.486024  658729 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 11:46:43.488846  658729 out.go:201] 
	W1209 11:46:43.490005  658729 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 11:46:43.490065  658729 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 11:46:43.490094  658729 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 11:46:43.491599  658729 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-014592 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-014592 -n old-k8s-version-014592
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-014592 -n old-k8s-version-014592: exit status 6 (245.367779ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 11:46:43.772309  661654 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-014592" does not appear in /home/jenkins/minikube-integration/20068-609844/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-014592" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (269.15s)
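The wait-control-plane timeout above, together with minikube's own suggestion to pass --extra-config=kubelet.cgroup-driver=systemd, indicates the kubelet on the v1.20.0 node never became healthy (its /healthz endpoint kept refusing connections). A minimal manual retry sketch, reusing only the profile and flags shown in the failing command plus the suggested extra-config (not verified against this report):

	out/minikube-linux-amd64 delete -p old-k8s-version-014592
	out/minikube-linux-amd64 start -p old-k8s-version-014592 --driver=kvm2 --container-runtime=crio \
		--kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

If the kubelet health check still fails, the crictl commands printed by kubeadm above can be run on the node to inspect the control-plane containers.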

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-005123 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-005123 --alsologtostderr -v=3: exit status 82 (2m0.493898298s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-005123"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 11:44:04.828392  660136 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:44:04.828622  660136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:44:04.828630  660136 out.go:358] Setting ErrFile to fd 2...
	I1209 11:44:04.828633  660136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:44:04.828793  660136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:44:04.829012  660136 out.go:352] Setting JSON to false
	I1209 11:44:04.829085  660136 mustload.go:65] Loading cluster: embed-certs-005123
	I1209 11:44:04.829405  660136 config.go:182] Loaded profile config "embed-certs-005123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:44:04.829474  660136 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/config.json ...
	I1209 11:44:04.829622  660136 mustload.go:65] Loading cluster: embed-certs-005123
	I1209 11:44:04.829736  660136 config.go:182] Loaded profile config "embed-certs-005123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:44:04.829774  660136 stop.go:39] StopHost: embed-certs-005123
	I1209 11:44:04.830193  660136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:44:04.830246  660136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:44:04.846130  660136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I1209 11:44:04.846681  660136 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:44:04.847747  660136 main.go:141] libmachine: Using API Version  1
	I1209 11:44:04.847776  660136 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:44:04.848865  660136 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:44:04.851090  660136 out.go:177] * Stopping node "embed-certs-005123"  ...
	I1209 11:44:04.852183  660136 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1209 11:44:04.852255  660136 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:44:04.852473  660136 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1209 11:44:04.852503  660136 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:44:04.855364  660136 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:44:04.855789  660136 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:43:16 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:44:04.855816  660136 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:44:04.855974  660136 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:44:04.856134  660136 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:44:04.856297  660136 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:44:04.856443  660136 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:44:04.951809  660136 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1209 11:44:05.013941  660136 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1209 11:44:05.052285  660136 main.go:141] libmachine: Stopping "embed-certs-005123"...
	I1209 11:44:05.052328  660136 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:44:05.054430  660136 main.go:141] libmachine: (embed-certs-005123) Calling .Stop
	I1209 11:44:05.059205  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 0/120
	I1209 11:44:06.060547  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 1/120
	I1209 11:44:07.062288  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 2/120
	I1209 11:44:08.063736  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 3/120
	I1209 11:44:09.065258  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 4/120
	I1209 11:44:10.067596  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 5/120
	I1209 11:44:11.068963  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 6/120
	I1209 11:44:12.070922  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 7/120
	I1209 11:44:13.072374  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 8/120
	I1209 11:44:14.073885  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 9/120
	I1209 11:44:15.075340  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 10/120
	I1209 11:44:16.076992  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 11/120
	I1209 11:44:17.079086  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 12/120
	I1209 11:44:18.080604  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 13/120
	I1209 11:44:19.082373  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 14/120
	I1209 11:44:20.084840  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 15/120
	I1209 11:44:21.086563  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 16/120
	I1209 11:44:22.088587  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 17/120
	I1209 11:44:23.090248  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 18/120
	I1209 11:44:24.091749  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 19/120
	I1209 11:44:25.093913  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 20/120
	I1209 11:44:26.095431  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 21/120
	I1209 11:44:27.096932  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 22/120
	I1209 11:44:28.098612  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 23/120
	I1209 11:44:29.100828  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 24/120
	I1209 11:44:30.102961  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 25/120
	I1209 11:44:31.104932  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 26/120
	I1209 11:44:32.106359  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 27/120
	I1209 11:44:33.107850  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 28/120
	I1209 11:44:34.109448  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 29/120
	I1209 11:44:35.111786  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 30/120
	I1209 11:44:36.113231  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 31/120
	I1209 11:44:37.114941  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 32/120
	I1209 11:44:38.116746  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 33/120
	I1209 11:44:39.118180  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 34/120
	I1209 11:44:40.120018  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 35/120
	I1209 11:44:41.121555  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 36/120
	I1209 11:44:42.122921  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 37/120
	I1209 11:44:43.124779  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 38/120
	I1209 11:44:44.126214  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 39/120
	I1209 11:44:45.128667  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 40/120
	I1209 11:44:46.129972  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 41/120
	I1209 11:44:47.131271  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 42/120
	I1209 11:44:48.132889  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 43/120
	I1209 11:44:49.134220  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 44/120
	I1209 11:44:50.136073  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 45/120
	I1209 11:44:51.137565  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 46/120
	I1209 11:44:52.139224  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 47/120
	I1209 11:44:53.141070  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 48/120
	I1209 11:44:54.142611  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 49/120
	I1209 11:44:55.144819  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 50/120
	I1209 11:44:56.146277  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 51/120
	I1209 11:44:57.147812  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 52/120
	I1209 11:44:58.149439  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 53/120
	I1209 11:44:59.151002  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 54/120
	I1209 11:45:00.153392  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 55/120
	I1209 11:45:01.154887  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 56/120
	I1209 11:45:02.156601  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 57/120
	I1209 11:45:03.158300  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 58/120
	I1209 11:45:04.160915  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 59/120
	I1209 11:45:05.163403  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 60/120
	I1209 11:45:06.164883  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 61/120
	I1209 11:45:07.166400  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 62/120
	I1209 11:45:08.167925  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 63/120
	I1209 11:45:09.169409  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 64/120
	I1209 11:45:10.171517  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 65/120
	I1209 11:45:11.173542  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 66/120
	I1209 11:45:12.175269  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 67/120
	I1209 11:45:13.176704  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 68/120
	I1209 11:45:14.178371  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 69/120
	I1209 11:45:15.181054  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 70/120
	I1209 11:45:16.183025  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 71/120
	I1209 11:45:17.184383  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 72/120
	I1209 11:45:18.186159  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 73/120
	I1209 11:45:19.187940  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 74/120
	I1209 11:45:20.190026  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 75/120
	I1209 11:45:21.191534  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 76/120
	I1209 11:45:22.193052  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 77/120
	I1209 11:45:23.194565  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 78/120
	I1209 11:45:24.196825  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 79/120
	I1209 11:45:25.198916  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 80/120
	I1209 11:45:26.200306  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 81/120
	I1209 11:45:27.201741  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 82/120
	I1209 11:45:28.203247  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 83/120
	I1209 11:45:29.204555  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 84/120
	I1209 11:45:30.206677  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 85/120
	I1209 11:45:31.208065  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 86/120
	I1209 11:45:32.209494  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 87/120
	I1209 11:45:33.210926  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 88/120
	I1209 11:45:34.212380  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 89/120
	I1209 11:45:35.214112  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 90/120
	I1209 11:45:36.215887  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 91/120
	I1209 11:45:37.217776  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 92/120
	I1209 11:45:38.219428  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 93/120
	I1209 11:45:39.220952  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 94/120
	I1209 11:45:40.223027  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 95/120
	I1209 11:45:41.224783  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 96/120
	I1209 11:45:42.226124  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 97/120
	I1209 11:45:43.227672  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 98/120
	I1209 11:45:44.228953  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 99/120
	I1209 11:45:45.231183  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 100/120
	I1209 11:45:46.232682  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 101/120
	I1209 11:45:47.234359  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 102/120
	I1209 11:45:48.235804  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 103/120
	I1209 11:45:49.238041  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 104/120
	I1209 11:45:50.239909  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 105/120
	I1209 11:45:51.241187  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 106/120
	I1209 11:45:52.243107  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 107/120
	I1209 11:45:53.244272  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 108/120
	I1209 11:45:54.245884  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 109/120
	I1209 11:45:55.248151  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 110/120
	I1209 11:45:56.249513  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 111/120
	I1209 11:45:57.250956  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 112/120
	I1209 11:45:58.252484  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 113/120
	I1209 11:45:59.253948  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 114/120
	I1209 11:46:00.255798  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 115/120
	I1209 11:46:01.257359  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 116/120
	I1209 11:46:02.258958  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 117/120
	I1209 11:46:03.260244  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 118/120
	I1209 11:46:04.261988  660136 main.go:141] libmachine: (embed-certs-005123) Waiting for machine to stop 119/120
	I1209 11:46:05.263201  660136 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1209 11:46:05.263266  660136 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1209 11:46:05.265130  660136 out.go:201] 
	W1209 11:46:05.266458  660136 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1209 11:46:05.266476  660136 out.go:270] * 
	* 
	W1209 11:46:05.269855  660136 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 11:46:05.272118  660136 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-005123 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-005123 -n embed-certs-005123
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-005123 -n embed-certs-005123: exit status 3 (18.519580464s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 11:46:23.790519  661010 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.218:22: connect: no route to host
	E1209 11:46:23.790540  661010 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.218:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-005123" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.02s)
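The stop command polled the machine state once per second for the full 120-attempt window ("Waiting for machine to stop 0/120" through "119/120") and the VM never left the "Running" state, which is what produces GUEST_STOP_TIMEOUT and exit status 82. A sketch for collecting the artifacts the error box asks for, using only paths shown in the output (gathering logs still requires the VM to be reachable, which the later "no route to host" errors put in doubt):

	out/minikube-linux-amd64 logs -p embed-certs-005123 --file=logs.txt
	cp /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log .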

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (138.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-820741 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-820741 --alsologtostderr -v=3: exit status 82 (2m0.501440203s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-820741"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 11:44:50.183435  660431 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:44:50.183740  660431 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:44:50.183754  660431 out.go:358] Setting ErrFile to fd 2...
	I1209 11:44:50.183759  660431 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:44:50.183937  660431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:44:50.184204  660431 out.go:352] Setting JSON to false
	I1209 11:44:50.184302  660431 mustload.go:65] Loading cluster: no-preload-820741
	I1209 11:44:50.184706  660431 config.go:182] Loaded profile config "no-preload-820741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:44:50.184772  660431 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/config.json ...
	I1209 11:44:50.184968  660431 mustload.go:65] Loading cluster: no-preload-820741
	I1209 11:44:50.185082  660431 config.go:182] Loaded profile config "no-preload-820741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:44:50.185114  660431 stop.go:39] StopHost: no-preload-820741
	I1209 11:44:50.185554  660431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:44:50.185616  660431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:44:50.202916  660431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37077
	I1209 11:44:50.203420  660431 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:44:50.203993  660431 main.go:141] libmachine: Using API Version  1
	I1209 11:44:50.204020  660431 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:44:50.204382  660431 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:44:50.206525  660431 out.go:177] * Stopping node "no-preload-820741"  ...
	I1209 11:44:50.208094  660431 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1209 11:44:50.208121  660431 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:44:50.208330  660431 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1209 11:44:50.208371  660431 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:44:50.211233  660431 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:44:50.211656  660431 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:44:50.211689  660431 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:44:50.211836  660431 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:44:50.212016  660431 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:44:50.212187  660431 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:44:50.212336  660431 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:44:50.310208  660431 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1209 11:44:50.371789  660431 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1209 11:44:50.410440  660431 main.go:141] libmachine: Stopping "no-preload-820741"...
	I1209 11:44:50.410482  660431 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:44:50.412183  660431 main.go:141] libmachine: (no-preload-820741) Calling .Stop
	I1209 11:44:50.415974  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 0/120
	I1209 11:44:51.417445  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 1/120
	I1209 11:44:52.419275  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 2/120
	I1209 11:44:53.420918  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 3/120
	I1209 11:44:54.422471  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 4/120
	I1209 11:44:55.425017  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 5/120
	I1209 11:44:56.426539  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 6/120
	I1209 11:44:57.428105  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 7/120
	I1209 11:44:58.429986  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 8/120
	I1209 11:44:59.431759  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 9/120
	I1209 11:45:00.433263  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 10/120
	I1209 11:45:01.435129  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 11/120
	I1209 11:45:02.436669  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 12/120
	I1209 11:45:03.438055  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 13/120
	I1209 11:45:04.439535  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 14/120
	I1209 11:45:05.441873  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 15/120
	I1209 11:45:06.443497  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 16/120
	I1209 11:45:07.444913  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 17/120
	I1209 11:45:08.446787  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 18/120
	I1209 11:45:09.448272  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 19/120
	I1209 11:45:10.450861  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 20/120
	I1209 11:45:11.452708  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 21/120
	I1209 11:45:12.454554  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 22/120
	I1209 11:45:13.456373  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 23/120
	I1209 11:45:14.458336  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 24/120
	I1209 11:45:15.460489  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 25/120
	I1209 11:45:16.462238  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 26/120
	I1209 11:45:17.464033  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 27/120
	I1209 11:45:18.465719  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 28/120
	I1209 11:45:19.467220  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 29/120
	I1209 11:45:20.469652  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 30/120
	I1209 11:45:21.471151  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 31/120
	I1209 11:45:22.472604  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 32/120
	I1209 11:45:23.474827  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 33/120
	I1209 11:45:24.476859  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 34/120
	I1209 11:45:25.479033  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 35/120
	I1209 11:45:26.480692  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 36/120
	I1209 11:45:27.482024  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 37/120
	I1209 11:45:28.483374  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 38/120
	I1209 11:45:29.484728  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 39/120
	I1209 11:45:30.486892  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 40/120
	I1209 11:45:31.488614  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 41/120
	I1209 11:45:32.489974  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 42/120
	I1209 11:45:33.491374  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 43/120
	I1209 11:45:34.492803  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 44/120
	I1209 11:45:35.495064  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 45/120
	I1209 11:45:36.496604  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 46/120
	I1209 11:45:37.498071  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 47/120
	I1209 11:45:38.499917  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 48/120
	I1209 11:45:39.501503  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 49/120
	I1209 11:45:40.503578  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 50/120
	I1209 11:45:41.505097  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 51/120
	I1209 11:45:42.506481  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 52/120
	I1209 11:45:43.508087  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 53/120
	I1209 11:45:44.509607  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 54/120
	I1209 11:45:45.511949  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 55/120
	I1209 11:45:46.513459  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 56/120
	I1209 11:45:47.514984  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 57/120
	I1209 11:45:48.516564  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 58/120
	I1209 11:45:49.518307  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 59/120
	I1209 11:45:50.520302  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 60/120
	I1209 11:45:51.521799  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 61/120
	I1209 11:45:52.523143  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 62/120
	I1209 11:45:53.524732  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 63/120
	I1209 11:45:54.526061  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 64/120
	I1209 11:45:55.528014  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 65/120
	I1209 11:45:56.529750  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 66/120
	I1209 11:45:57.531217  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 67/120
	I1209 11:45:58.532852  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 68/120
	I1209 11:45:59.534544  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 69/120
	I1209 11:46:00.536070  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 70/120
	I1209 11:46:01.537669  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 71/120
	I1209 11:46:02.539221  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 72/120
	I1209 11:46:03.540709  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 73/120
	I1209 11:46:04.542077  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 74/120
	I1209 11:46:05.544177  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 75/120
	I1209 11:46:06.545647  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 76/120
	I1209 11:46:07.547342  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 77/120
	I1209 11:46:08.549013  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 78/120
	I1209 11:46:09.550598  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 79/120
	I1209 11:46:10.552822  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 80/120
	I1209 11:46:11.554331  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 81/120
	I1209 11:46:12.556578  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 82/120
	I1209 11:46:13.558843  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 83/120
	I1209 11:46:14.560965  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 84/120
	I1209 11:46:15.563023  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 85/120
	I1209 11:46:16.564833  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 86/120
	I1209 11:46:17.566488  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 87/120
	I1209 11:46:18.567938  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 88/120
	I1209 11:46:19.569313  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 89/120
	I1209 11:46:20.570945  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 90/120
	I1209 11:46:21.572894  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 91/120
	I1209 11:46:22.575062  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 92/120
	I1209 11:46:23.576833  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 93/120
	I1209 11:46:24.578160  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 94/120
	I1209 11:46:25.580384  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 95/120
	I1209 11:46:26.582157  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 96/120
	I1209 11:46:27.583621  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 97/120
	I1209 11:46:28.585164  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 98/120
	I1209 11:46:29.586873  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 99/120
	I1209 11:46:30.589216  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 100/120
	I1209 11:46:31.590720  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 101/120
	I1209 11:46:32.592833  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 102/120
	I1209 11:46:33.594375  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 103/120
	I1209 11:46:34.595732  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 104/120
	I1209 11:46:35.598484  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 105/120
	I1209 11:46:36.600960  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 106/120
	I1209 11:46:37.602487  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 107/120
	I1209 11:46:38.603972  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 108/120
	I1209 11:46:39.605551  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 109/120
	I1209 11:46:40.607766  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 110/120
	I1209 11:46:41.609141  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 111/120
	I1209 11:46:42.610578  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 112/120
	I1209 11:46:43.612742  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 113/120
	I1209 11:46:44.614402  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 114/120
	I1209 11:46:45.616238  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 115/120
	I1209 11:46:46.617617  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 116/120
	I1209 11:46:47.619360  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 117/120
	I1209 11:46:48.620753  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 118/120
	I1209 11:46:49.622627  660431 main.go:141] libmachine: (no-preload-820741) Waiting for machine to stop 119/120
	I1209 11:46:50.623478  660431 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1209 11:46:50.623572  660431 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1209 11:46:50.625477  660431 out.go:201] 
	W1209 11:46:50.626692  660431 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1209 11:46:50.626717  660431 out.go:270] * 
	* 
	W1209 11:46:50.631105  660431 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 11:46:50.632287  660431 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-820741 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-820741 -n no-preload-820741
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-820741 -n no-preload-820741: exit status 3 (18.467073638s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 11:47:09.102628  661835 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.169:22: connect: no route to host
	E1209 11:47:09.102659  661835 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.169:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-820741" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.97s)
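This is the same two-minute stop timeout as the embed-certs run above: libmachine calls .Stop, then polls 120 times while the guest stays "Running". One host-side way to confirm whether the guest is ignoring the shutdown request is the standard libvirt CLI; virsh does not appear anywhere in this report, and the assumption that the libvirt domain name matches the minikube profile name is unverified:

	virsh list --all                   # confirm the domain exists and check its state
	virsh shutdown no-preload-820741   # ask the guest to shut down gracefully
	virsh destroy no-preload-820741    # hard power-off if the shutdown request is ignored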

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-005123 -n embed-certs-005123
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-005123 -n embed-certs-005123: exit status 3 (3.165893066s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 11:46:26.958554  661201 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.218:22: connect: no route to host
	E1209 11:46:26.958578  661201 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.218:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-005123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-005123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152993532s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.218:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-005123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-005123 -n embed-certs-005123
E1209 11:46:33.303741  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-005123 -n embed-certs-005123: exit status 3 (3.065155464s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 11:46:36.174625  661508 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.218:22: connect: no route to host
	E1209 11:46:36.174651  661508 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.218:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-005123" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-014592 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-014592 create -f testdata/busybox.yaml: exit status 1 (44.626269ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-014592" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-014592 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-014592 -n old-k8s-version-014592
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-014592 -n old-k8s-version-014592: exit status 6 (218.293405ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 11:46:44.042706  661694 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-014592" does not appear in /home/jenkins/minikube-integration/20068-609844/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-014592" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-014592 -n old-k8s-version-014592
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-014592 -n old-k8s-version-014592: exit status 6 (235.833903ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 11:46:44.274037  661724 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-014592" does not appear in /home/jenkins/minikube-integration/20068-609844/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-014592" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)
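The deploy fails before any YAML reaches the cluster: the kubeconfig at /home/jenkins/minikube-integration/20068-609844/kubeconfig has no entry for the profile, a consequence of the failed FirstStart above, so kubectl reports that the context does not exist. The status output names the repair path; a minimal sketch, only useful once the cluster actually starts:

	out/minikube-linux-amd64 update-context -p old-k8s-version-014592
	kubectl config get-contexts
	kubectl --context old-k8s-version-014592 create -f testdata/busybox.yaml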

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (99.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-014592 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-014592 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m39.589056507s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-014592 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-014592 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-014592 describe deploy/metrics-server -n kube-system: exit status 1 (45.642852ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-014592" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-014592 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-014592 -n old-k8s-version-014592
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-014592 -n old-k8s-version-014592: exit status 6 (230.745229ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 11:48:24.141507  662458 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-014592" does not appear in /home/jenkins/minikube-integration/20068-609844/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-014592" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (99.87s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-820741 -n no-preload-820741
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-820741 -n no-preload-820741: exit status 3 (3.19981528s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 11:47:12.302619  661948 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.169:22: connect: no route to host
	E1209 11:47:12.302649  661948 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.169:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-820741 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-820741 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151767764s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.169:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-820741 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-820741 -n no-preload-820741
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-820741 -n no-preload-820741: exit status 3 (3.06381187s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 11:47:21.518691  662079 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.169:22: connect: no route to host
	E1209 11:47:21.518715  662079 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.169:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-820741" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (138.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-482476 --alsologtostderr -v=3
E1209 11:48:22.653039  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-482476 --alsologtostderr -v=3: exit status 82 (2m0.51863477s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-482476"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 11:47:28.122362  662227 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:47:28.122464  662227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:47:28.122471  662227 out.go:358] Setting ErrFile to fd 2...
	I1209 11:47:28.122475  662227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:47:28.122688  662227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:47:28.122905  662227 out.go:352] Setting JSON to false
	I1209 11:47:28.122982  662227 mustload.go:65] Loading cluster: default-k8s-diff-port-482476
	I1209 11:47:28.123371  662227 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:47:28.123439  662227 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/config.json ...
	I1209 11:47:28.123606  662227 mustload.go:65] Loading cluster: default-k8s-diff-port-482476
	I1209 11:47:28.123707  662227 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:47:28.123742  662227 stop.go:39] StopHost: default-k8s-diff-port-482476
	I1209 11:47:28.124100  662227 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:47:28.124155  662227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:47:28.139039  662227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33719
	I1209 11:47:28.139553  662227 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:47:28.140166  662227 main.go:141] libmachine: Using API Version  1
	I1209 11:47:28.140193  662227 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:47:28.140542  662227 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:47:28.142860  662227 out.go:177] * Stopping node "default-k8s-diff-port-482476"  ...
	I1209 11:47:28.144086  662227 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1209 11:47:28.144130  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:47:28.144332  662227 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1209 11:47:28.144373  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:47:28.147248  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:47:28.147703  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:46:40 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:47:28.147732  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:47:28.147985  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:47:28.148170  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:47:28.148324  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:47:28.148518  662227 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:47:28.243801  662227 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1209 11:47:28.309576  662227 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1209 11:47:28.381012  662227 main.go:141] libmachine: Stopping "default-k8s-diff-port-482476"...
	I1209 11:47:28.381038  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:47:28.382654  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Stop
	I1209 11:47:28.386296  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 0/120
	I1209 11:47:29.387843  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 1/120
	I1209 11:47:30.389182  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 2/120
	I1209 11:47:31.390606  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 3/120
	I1209 11:47:32.392280  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 4/120
	I1209 11:47:33.394806  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 5/120
	I1209 11:47:34.396214  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 6/120
	I1209 11:47:35.397930  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 7/120
	I1209 11:47:36.399186  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 8/120
	I1209 11:47:37.400986  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 9/120
	I1209 11:47:38.402580  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 10/120
	I1209 11:47:39.404028  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 11/120
	I1209 11:47:40.405473  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 12/120
	I1209 11:47:41.406933  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 13/120
	I1209 11:47:42.408555  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 14/120
	I1209 11:47:43.410735  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 15/120
	I1209 11:47:44.412160  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 16/120
	I1209 11:47:45.413724  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 17/120
	I1209 11:47:46.415167  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 18/120
	I1209 11:47:47.416690  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 19/120
	I1209 11:47:48.419285  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 20/120
	I1209 11:47:49.420631  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 21/120
	I1209 11:47:50.422281  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 22/120
	I1209 11:47:51.423684  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 23/120
	I1209 11:47:52.425511  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 24/120
	I1209 11:47:53.427856  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 25/120
	I1209 11:47:54.429635  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 26/120
	I1209 11:47:55.431267  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 27/120
	I1209 11:47:56.432652  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 28/120
	I1209 11:47:57.434376  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 29/120
	I1209 11:47:58.436539  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 30/120
	I1209 11:47:59.437877  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 31/120
	I1209 11:48:00.439341  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 32/120
	I1209 11:48:01.440841  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 33/120
	I1209 11:48:02.442286  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 34/120
	I1209 11:48:03.444497  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 35/120
	I1209 11:48:04.446107  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 36/120
	I1209 11:48:05.447523  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 37/120
	I1209 11:48:06.448741  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 38/120
	I1209 11:48:07.450237  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 39/120
	I1209 11:48:08.451708  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 40/120
	I1209 11:48:09.453124  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 41/120
	I1209 11:48:10.454547  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 42/120
	I1209 11:48:11.456036  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 43/120
	I1209 11:48:12.457533  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 44/120
	I1209 11:48:13.459928  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 45/120
	I1209 11:48:14.461298  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 46/120
	I1209 11:48:15.462750  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 47/120
	I1209 11:48:16.464023  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 48/120
	I1209 11:48:17.465649  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 49/120
	I1209 11:48:18.468036  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 50/120
	I1209 11:48:19.469515  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 51/120
	I1209 11:48:20.471214  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 52/120
	I1209 11:48:21.472712  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 53/120
	I1209 11:48:22.474394  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 54/120
	I1209 11:48:23.476749  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 55/120
	I1209 11:48:24.478838  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 56/120
	I1209 11:48:25.480519  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 57/120
	I1209 11:48:26.481999  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 58/120
	I1209 11:48:27.483523  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 59/120
	I1209 11:48:28.485924  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 60/120
	I1209 11:48:29.487757  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 61/120
	I1209 11:48:30.489356  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 62/120
	I1209 11:48:31.490914  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 63/120
	I1209 11:48:32.492633  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 64/120
	I1209 11:48:33.494884  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 65/120
	I1209 11:48:34.496266  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 66/120
	I1209 11:48:35.497849  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 67/120
	I1209 11:48:36.499499  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 68/120
	I1209 11:48:37.500955  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 69/120
	I1209 11:48:38.503379  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 70/120
	I1209 11:48:39.504763  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 71/120
	I1209 11:48:40.506241  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 72/120
	I1209 11:48:41.507584  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 73/120
	I1209 11:48:42.509222  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 74/120
	I1209 11:48:43.511618  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 75/120
	I1209 11:48:44.513061  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 76/120
	I1209 11:48:45.514766  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 77/120
	I1209 11:48:46.516107  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 78/120
	I1209 11:48:47.517616  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 79/120
	I1209 11:48:48.518982  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 80/120
	I1209 11:48:49.520446  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 81/120
	I1209 11:48:50.522261  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 82/120
	I1209 11:48:51.523792  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 83/120
	I1209 11:48:52.525304  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 84/120
	I1209 11:48:53.527523  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 85/120
	I1209 11:48:54.529258  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 86/120
	I1209 11:48:55.530751  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 87/120
	I1209 11:48:56.532146  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 88/120
	I1209 11:48:57.533587  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 89/120
	I1209 11:48:58.535062  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 90/120
	I1209 11:48:59.536702  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 91/120
	I1209 11:49:00.538336  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 92/120
	I1209 11:49:01.539712  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 93/120
	I1209 11:49:02.541177  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 94/120
	I1209 11:49:03.543419  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 95/120
	I1209 11:49:04.545030  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 96/120
	I1209 11:49:05.546559  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 97/120
	I1209 11:49:06.548342  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 98/120
	I1209 11:49:07.550093  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 99/120
	I1209 11:49:08.552794  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 100/120
	I1209 11:49:09.554313  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 101/120
	I1209 11:49:10.555702  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 102/120
	I1209 11:49:11.557524  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 103/120
	I1209 11:49:12.559268  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 104/120
	I1209 11:49:13.561535  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 105/120
	I1209 11:49:14.563061  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 106/120
	I1209 11:49:15.564502  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 107/120
	I1209 11:49:16.566055  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 108/120
	I1209 11:49:17.567435  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 109/120
	I1209 11:49:18.568761  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 110/120
	I1209 11:49:19.570151  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 111/120
	I1209 11:49:20.571504  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 112/120
	I1209 11:49:21.572890  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 113/120
	I1209 11:49:22.574256  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 114/120
	I1209 11:49:23.576494  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 115/120
	I1209 11:49:24.577934  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 116/120
	I1209 11:49:25.579363  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 117/120
	I1209 11:49:26.580808  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 118/120
	I1209 11:49:27.582274  662227 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for machine to stop 119/120
	I1209 11:49:28.582904  662227 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1209 11:49:28.582972  662227 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1209 11:49:28.584923  662227 out.go:201] 
	W1209 11:49:28.586257  662227 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1209 11:49:28.586282  662227 out.go:270] * 
	* 
	W1209 11:49:28.589670  662227 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 11:49:28.590938  662227 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-482476 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476: exit status 3 (18.463482791s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 11:49:47.054566  662820 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.25:22: connect: no route to host
	E1209 11:49:47.054590  662820 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.25:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-482476" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.98s)

TestStartStop/group/old-k8s-version/serial/SecondStart (708.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-014592 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-014592 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m44.962604378s)

                                                
                                                
-- stdout --
	* [old-k8s-version-014592] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20068
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-014592" primary control-plane node in "old-k8s-version-014592" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-014592" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 11:48:27.686790  662586 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:48:27.686900  662586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:48:27.686910  662586 out.go:358] Setting ErrFile to fd 2...
	I1209 11:48:27.686914  662586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:48:27.687103  662586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:48:27.687652  662586 out.go:352] Setting JSON to false
	I1209 11:48:27.688603  662586 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":16252,"bootTime":1733728656,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 11:48:27.688707  662586 start.go:139] virtualization: kvm guest
	I1209 11:48:27.690712  662586 out.go:177] * [old-k8s-version-014592] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 11:48:27.691844  662586 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:48:27.691881  662586 notify.go:220] Checking for updates...
	I1209 11:48:27.694043  662586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:48:27.695074  662586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:48:27.696181  662586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:48:27.697336  662586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 11:48:27.698457  662586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:48:27.700069  662586 config.go:182] Loaded profile config "old-k8s-version-014592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 11:48:27.700458  662586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:48:27.700537  662586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:48:27.715373  662586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36239
	I1209 11:48:27.715910  662586 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:48:27.716580  662586 main.go:141] libmachine: Using API Version  1
	I1209 11:48:27.716604  662586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:48:27.716991  662586 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:48:27.717195  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:48:27.718895  662586 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 11:48:27.720018  662586 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:48:27.720352  662586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:48:27.720402  662586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:48:27.735380  662586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45483
	I1209 11:48:27.735896  662586 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:48:27.736424  662586 main.go:141] libmachine: Using API Version  1
	I1209 11:48:27.736454  662586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:48:27.736780  662586 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:48:27.736989  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:48:27.773308  662586 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 11:48:27.774336  662586 start.go:297] selected driver: kvm2
	I1209 11:48:27.774354  662586 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:48:27.774529  662586 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:48:27.775559  662586 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:48:27.775664  662586 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 11:48:27.790927  662586 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 11:48:27.791354  662586 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:48:27.791390  662586 cni.go:84] Creating CNI manager for ""
	I1209 11:48:27.791421  662586 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:48:27.791489  662586 start.go:340] cluster config:
	{Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:48:27.791592  662586 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:48:27.793251  662586 out.go:177] * Starting "old-k8s-version-014592" primary control-plane node in "old-k8s-version-014592" cluster
	I1209 11:48:27.794275  662586 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 11:48:27.794331  662586 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1209 11:48:27.794343  662586 cache.go:56] Caching tarball of preloaded images
	I1209 11:48:27.794432  662586 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 11:48:27.794445  662586 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1209 11:48:27.794560  662586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/config.json ...
	I1209 11:48:27.794770  662586 start.go:360] acquireMachinesLock for old-k8s-version-014592: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:51:44.103077  662586 start.go:364] duration metric: took 3m16.308265809s to acquireMachinesLock for "old-k8s-version-014592"
	I1209 11:51:44.103164  662586 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:51:44.103178  662586 fix.go:54] fixHost starting: 
	I1209 11:51:44.103657  662586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:51:44.103716  662586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:51:44.121162  662586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I1209 11:51:44.121672  662586 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:51:44.122203  662586 main.go:141] libmachine: Using API Version  1
	I1209 11:51:44.122232  662586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:51:44.122644  662586 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:51:44.122852  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:51:44.123023  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetState
	I1209 11:51:44.124544  662586 fix.go:112] recreateIfNeeded on old-k8s-version-014592: state=Stopped err=<nil>
	I1209 11:51:44.124567  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	W1209 11:51:44.124704  662586 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:51:44.126942  662586 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-014592" ...
	I1209 11:51:44.128421  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .Start
	I1209 11:51:44.128663  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring networks are active...
	I1209 11:51:44.129435  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring network default is active
	I1209 11:51:44.129805  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring network mk-old-k8s-version-014592 is active
	I1209 11:51:44.130314  662586 main.go:141] libmachine: (old-k8s-version-014592) Getting domain xml...
	I1209 11:51:44.131070  662586 main.go:141] libmachine: (old-k8s-version-014592) Creating domain...
	I1209 11:51:45.405214  662586 main.go:141] libmachine: (old-k8s-version-014592) Waiting to get IP...
	I1209 11:51:45.406116  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:45.406680  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:45.406716  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:45.406613  663492 retry.go:31] will retry after 249.130873ms: waiting for machine to come up
	I1209 11:51:45.657224  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:45.657727  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:45.657756  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:45.657687  663492 retry.go:31] will retry after 363.458278ms: waiting for machine to come up
	I1209 11:51:46.023431  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.023912  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.023945  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.023851  663492 retry.go:31] will retry after 313.220722ms: waiting for machine to come up
	I1209 11:51:46.339300  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.339850  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.339876  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.339791  663492 retry.go:31] will retry after 517.613322ms: waiting for machine to come up
	I1209 11:51:46.859825  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.860229  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.860260  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.860198  663492 retry.go:31] will retry after 710.195232ms: waiting for machine to come up
	I1209 11:51:47.572460  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:47.573030  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:47.573080  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:47.573008  663492 retry.go:31] will retry after 620.717522ms: waiting for machine to come up
	I1209 11:51:48.195603  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:48.196140  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:48.196172  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:48.196083  663492 retry.go:31] will retry after 747.45082ms: waiting for machine to come up
	I1209 11:51:48.945230  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:48.945682  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:48.945737  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:48.945661  663492 retry.go:31] will retry after 1.307189412s: waiting for machine to come up
	I1209 11:51:50.254747  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:50.255335  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:50.255359  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:50.255276  663492 retry.go:31] will retry after 1.269881759s: waiting for machine to come up
	I1209 11:51:51.526966  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:51.527400  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:51.527431  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:51.527348  663492 retry.go:31] will retry after 1.424091669s: waiting for machine to come up
	I1209 11:51:52.953290  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:52.953711  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:52.953743  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:52.953658  663492 retry.go:31] will retry after 2.009829783s: waiting for machine to come up
	I1209 11:51:54.965818  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:54.966337  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:54.966372  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:54.966285  663492 retry.go:31] will retry after 2.209879817s: waiting for machine to come up
	I1209 11:51:57.177397  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:57.177870  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:57.177901  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:57.177805  663492 retry.go:31] will retry after 2.999056002s: waiting for machine to come up
	I1209 11:52:00.178781  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:00.179225  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:52:00.179273  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:52:00.179165  663492 retry.go:31] will retry after 4.532370187s: waiting for machine to come up
	I1209 11:52:04.713201  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.713711  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has current primary IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.713817  662586 main.go:141] libmachine: (old-k8s-version-014592) Found IP for machine: 192.168.61.132
	I1209 11:52:04.713853  662586 main.go:141] libmachine: (old-k8s-version-014592) Reserving static IP address...
	I1209 11:52:04.714267  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "old-k8s-version-014592", mac: "52:54:00:54:72:3e", ip: "192.168.61.132"} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.714298  662586 main.go:141] libmachine: (old-k8s-version-014592) Reserved static IP address: 192.168.61.132
	I1209 11:52:04.714318  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | skip adding static IP to network mk-old-k8s-version-014592 - found existing host DHCP lease matching {name: "old-k8s-version-014592", mac: "52:54:00:54:72:3e", ip: "192.168.61.132"}
	I1209 11:52:04.714332  662586 main.go:141] libmachine: (old-k8s-version-014592) Waiting for SSH to be available...
	I1209 11:52:04.714347  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Getting to WaitForSSH function...
	I1209 11:52:04.716632  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.716972  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.717005  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.717129  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Using SSH client type: external
	I1209 11:52:04.717157  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa (-rw-------)
	I1209 11:52:04.717192  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:04.717206  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | About to run SSH command:
	I1209 11:52:04.717223  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | exit 0
	I1209 11:52:04.846290  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:04.846675  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetConfigRaw
	I1209 11:52:04.847483  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:04.850430  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.850859  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.850888  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.851113  662586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/config.json ...
	I1209 11:52:04.851328  662586 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:04.851348  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:04.851547  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:04.854318  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.854622  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.854654  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.854782  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:04.854959  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.855134  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.855276  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:04.855438  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:04.855696  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:04.855709  662586 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:04.963021  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:04.963059  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:04.963344  662586 buildroot.go:166] provisioning hostname "old-k8s-version-014592"
	I1209 11:52:04.963368  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:04.963545  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:04.966102  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.966461  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.966496  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.966607  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:04.966780  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.966919  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.967056  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:04.967221  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:04.967407  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:04.967419  662586 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-014592 && echo "old-k8s-version-014592" | sudo tee /etc/hostname
	I1209 11:52:05.094147  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-014592
	
	I1209 11:52:05.094210  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.097298  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.097729  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.097765  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.097949  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.098197  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.098460  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.098632  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.098829  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.099046  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.099082  662586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-014592' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-014592/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-014592' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:05.210739  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:05.210785  662586 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:05.210846  662586 buildroot.go:174] setting up certificates
	I1209 11:52:05.210859  662586 provision.go:84] configureAuth start
	I1209 11:52:05.210881  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:05.211210  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:05.214546  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.214937  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.214967  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.215167  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.217866  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.218269  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.218300  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.218452  662586 provision.go:143] copyHostCerts
	I1209 11:52:05.218530  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:05.218558  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:05.218630  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:05.218807  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:05.218820  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:05.218863  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:05.218943  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:05.218953  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:05.218983  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:05.219060  662586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-014592 san=[127.0.0.1 192.168.61.132 localhost minikube old-k8s-version-014592]
	I1209 11:52:05.292744  662586 provision.go:177] copyRemoteCerts
	I1209 11:52:05.292830  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:05.292867  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.296244  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.296670  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.296712  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.296896  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.297111  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.297330  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.297514  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.381148  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:05.404883  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 11:52:05.433421  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:05.456775  662586 provision.go:87] duration metric: took 245.894878ms to configureAuth
	I1209 11:52:05.456811  662586 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:05.457003  662586 config.go:182] Loaded profile config "old-k8s-version-014592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 11:52:05.457082  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.459984  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.460372  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.460415  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.460631  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.460851  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.461021  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.461217  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.461481  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.461702  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.461722  662586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:05.683276  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:05.683311  662586 machine.go:96] duration metric: took 831.968459ms to provisionDockerMachine
	I1209 11:52:05.683335  662586 start.go:293] postStartSetup for "old-k8s-version-014592" (driver="kvm2")
	I1209 11:52:05.683349  662586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:05.683391  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.683809  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:05.683850  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.687116  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.687540  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.687579  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.687787  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.688013  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.688204  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.688439  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.768777  662586 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:05.772572  662586 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:05.772603  662586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:05.772690  662586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:05.772813  662586 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:05.772942  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:05.784153  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:05.808677  662586 start.go:296] duration metric: took 125.320445ms for postStartSetup
	I1209 11:52:05.808736  662586 fix.go:56] duration metric: took 21.705557963s for fixHost
	I1209 11:52:05.808766  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.811685  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.812053  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.812090  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.812426  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.812639  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.812853  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.812996  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.813345  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.813562  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.813572  662586 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:05.914863  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745125.875320243
	
	I1209 11:52:05.914892  662586 fix.go:216] guest clock: 1733745125.875320243
	I1209 11:52:05.914906  662586 fix.go:229] Guest: 2024-12-09 11:52:05.875320243 +0000 UTC Remote: 2024-12-09 11:52:05.808742373 +0000 UTC m=+218.159686894 (delta=66.57787ms)
	I1209 11:52:05.914941  662586 fix.go:200] guest clock delta is within tolerance: 66.57787ms
	I1209 11:52:05.914952  662586 start.go:83] releasing machines lock for "old-k8s-version-014592", held for 21.811813657s
	I1209 11:52:05.914983  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.915289  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:05.918015  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.918513  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.918546  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.918662  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919315  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919508  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919628  662586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:05.919684  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.919739  662586 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:05.919767  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.922529  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.922816  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923096  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.923121  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923223  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.923258  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923291  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.923459  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.923602  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.923616  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.923848  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.923900  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.924030  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.924104  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:06.037215  662586 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:06.043193  662586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:06.193717  662586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:06.199693  662586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:06.199786  662586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:06.216007  662586 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:06.216040  662586 start.go:495] detecting cgroup driver to use...
	I1209 11:52:06.216131  662586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:06.233631  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:06.249730  662586 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:06.249817  662586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:06.265290  662586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:06.281676  662586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:06.432116  662586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:06.605899  662586 docker.go:233] disabling docker service ...
	I1209 11:52:06.606004  662586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:06.622861  662586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:06.637605  662586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:06.772842  662586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:06.905950  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:06.923048  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:06.943483  662586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 11:52:06.943542  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.957647  662586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:06.957725  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.970221  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.981243  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.992084  662586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:07.004284  662586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:07.014329  662586 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:07.014411  662586 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:07.028104  662586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:52:07.038782  662586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:07.155779  662586 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:07.271726  662586 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:07.271815  662586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:07.276994  662586 start.go:563] Will wait 60s for crictl version
	I1209 11:52:07.277061  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:07.281212  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:07.328839  662586 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:07.328959  662586 ssh_runner.go:195] Run: crio --version
	I1209 11:52:07.360632  662586 ssh_runner.go:195] Run: crio --version
	I1209 11:52:07.393046  662586 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1209 11:52:07.394357  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:07.398002  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:07.398539  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:07.398564  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:07.398893  662586 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:07.404512  662586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:07.417822  662586 kubeadm.go:883] updating cluster {Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:07.418006  662586 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 11:52:07.418108  662586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:07.473163  662586 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:52:07.473249  662586 ssh_runner.go:195] Run: which lz4
	I1209 11:52:07.478501  662586 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:07.483744  662586 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:07.483786  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1209 11:52:09.097654  662586 crio.go:462] duration metric: took 1.619191765s to copy over tarball
	I1209 11:52:09.097748  662586 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:12.304496  662586 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.20670295s)
	I1209 11:52:12.304543  662586 crio.go:469] duration metric: took 3.206852542s to extract the tarball
	I1209 11:52:12.304553  662586 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:12.347991  662586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:12.385411  662586 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:52:12.385438  662586 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 11:52:12.385533  662586 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:12.385557  662586 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.385570  662586 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.385609  662586 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.385641  662586 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 11:52:12.385650  662586 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.385645  662586 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.385620  662586 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.387326  662586 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.387335  662586 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:12.387371  662586 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 11:52:12.387372  662586 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.387328  662586 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.387338  662586 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.387328  662586 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.387383  662586 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.621631  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.623694  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.632536  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 11:52:12.634550  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.638401  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.641071  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.645344  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.756066  662586 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 11:52:12.756121  662586 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.756134  662586 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 11:52:12.756175  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.756179  662586 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.756230  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.808091  662586 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 11:52:12.808139  662586 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 11:52:12.808186  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809593  662586 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 11:52:12.809622  662586 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 11:52:12.809637  662586 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.809659  662586 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.809682  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809712  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809775  662586 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 11:52:12.809803  662586 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.809829  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.809841  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809724  662586 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 11:52:12.809873  662586 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.809898  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809933  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.812256  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:12.819121  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.825106  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.910431  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.910501  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.910560  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.910503  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.910638  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:12.910713  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.930461  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:13.079147  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:13.079189  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:13.079233  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:13.079276  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:13.079418  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:13.079447  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:13.079517  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:13.224753  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 11:52:13.227126  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 11:52:13.227190  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:13.227253  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 11:52:13.227291  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:13.227332  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 11:52:13.227393  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 11:52:13.277747  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 11:52:13.285286  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 11:52:13.663858  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:13.805603  662586 cache_images.go:92] duration metric: took 1.420145666s to LoadCachedImages
	W1209 11:52:13.805814  662586 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1209 11:52:13.805848  662586 kubeadm.go:934] updating node { 192.168.61.132 8443 v1.20.0 crio true true} ...
	I1209 11:52:13.805980  662586 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-014592 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:13.806079  662586 ssh_runner.go:195] Run: crio config
	I1209 11:52:13.870766  662586 cni.go:84] Creating CNI manager for ""
	I1209 11:52:13.870797  662586 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:13.870813  662586 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:13.870841  662586 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.132 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-014592 NodeName:old-k8s-version-014592 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 11:52:13.871050  662586 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-014592"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
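The block above is the kubeadm, kubelet, and kube-proxy configuration that minikube renders before copying it to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration of how a config like this can be rendered from a Go struct with text/template (the clusterConfig type, its fields, and the trimmed template below are hypothetical, not minikube's actual types):

// render_kubeadm.go — a minimal, illustrative sketch of rendering a kubeadm
// config from a Go struct with text/template. The struct and field names are
// hypothetical; they are not minikube's real bootstrapper types.
package main

import (
	"os"
	"text/template"
)

type clusterConfig struct {
	AdvertiseAddress  string
	BindPort          int
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	cfg := clusterConfig{
		AdvertiseAddress:  "192.168.61.132",
		BindPort:          8443,
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.20.0",
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Print the rendered config; minikube instead copies the full document
	// to /var/tmp/minikube/kubeadm.yaml.new on the node over SSH.
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}

Running the sketch prints an InitConfiguration/ClusterConfiguration pair analogous to (but much shorter than) the one logged above.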
	I1209 11:52:13.871136  662586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 11:52:13.881556  662586 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:13.881628  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:13.891122  662586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1209 11:52:13.908181  662586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:13.925041  662586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1209 11:52:13.941567  662586 ssh_runner.go:195] Run: grep 192.168.61.132	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:13.945502  662586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:13.957476  662586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:14.091699  662586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:14.108772  662586 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592 for IP: 192.168.61.132
	I1209 11:52:14.108810  662586 certs.go:194] generating shared ca certs ...
	I1209 11:52:14.108838  662586 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:14.109024  662586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:14.109087  662586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:14.109105  662586 certs.go:256] generating profile certs ...
	I1209 11:52:14.109248  662586 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.key
	I1209 11:52:14.109323  662586 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key.28078577
	I1209 11:52:14.109383  662586 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key
	I1209 11:52:14.109572  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:14.109609  662586 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:14.109619  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:14.109659  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:14.109697  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:14.109737  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:14.109802  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:14.110497  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:14.145815  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:14.179452  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:14.217469  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:14.250288  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 11:52:14.287110  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:52:14.317190  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:14.356825  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:14.379756  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:14.402045  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:14.425287  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:14.448025  662586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:14.464144  662586 ssh_runner.go:195] Run: openssl version
	I1209 11:52:14.470256  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:14.481298  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.485849  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.485904  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.492321  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:14.504155  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:14.515819  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.520876  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.520955  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.527295  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:14.538319  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:14.549753  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.554273  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.554341  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.559893  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:14.570744  662586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:14.575763  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:14.582279  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:14.588549  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:14.594376  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:14.599758  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:14.605497  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
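The openssl x509 -checkend 86400 runs above verify that each control-plane certificate remains valid for at least another 24 hours; the command exits non-zero if the certificate expires within the given number of seconds. A minimal Go sketch of the same check using crypto/x509, with an example certificate path; the helper name expiresWithin is illustrative:

// checkcert.go — an illustrative sketch of what "openssl x509 -checkend 86400"
// verifies: that a certificate will still be valid 24 hours from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the certificate's NotAfter falls inside the window — the same
	// condition that makes "-checkend 86400" exit non-zero.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regenerate it")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}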
	I1209 11:52:14.611083  662586 kubeadm.go:392] StartCluster: {Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:14.611213  662586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:14.611288  662586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:14.649447  662586 cri.go:89] found id: ""
	I1209 11:52:14.649538  662586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:14.660070  662586 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:14.660094  662586 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:14.660145  662586 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:14.670412  662586 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:14.671387  662586 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-014592" does not appear in /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:52:14.672043  662586 kubeconfig.go:62] /home/jenkins/minikube-integration/20068-609844/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-014592" cluster setting kubeconfig missing "old-k8s-version-014592" context setting]
	I1209 11:52:14.673337  662586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:14.708285  662586 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:14.719486  662586 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.132
	I1209 11:52:14.719535  662586 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:14.719563  662586 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:14.719635  662586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:14.755280  662586 cri.go:89] found id: ""
	I1209 11:52:14.755369  662586 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:14.771385  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:14.781364  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:14.781387  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:14.781455  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:14.790942  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:14.791016  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:14.800481  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:14.809875  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:14.809948  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:14.819619  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:14.831670  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:14.831750  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:14.844244  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:14.853328  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:14.853403  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:14.862428  662586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
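The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes must reference https://control-plane.minikube.internal:8443, and any file that is missing or points elsewhere is removed so the kubeadm init phase kubeconfig step that follows can regenerate it. A minimal sketch of that loop, run against the local filesystem rather than over SSH with sudo as minikube does:

// stale_config.go — a sketch of the stale-config cleanup: keep a kubeconfig
// only if it points at the expected control-plane endpoint, otherwise remove
// it and let "kubeadm init phase kubeconfig" recreate it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: drop it so kubeadm regenerates it.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, rmErr)
			}
			continue
		}
		fmt.Printf("%s already points at %s, keeping it\n", f, endpoint)
	}
}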
	I1209 11:52:14.871346  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.007799  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.697594  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.921787  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:16.031826  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:16.132199  662586 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:16.132310  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:16.633329  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:17.133389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:17.632581  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:18.133165  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:18.632403  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:19.132416  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:19.633332  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:20.133343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:20.632968  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:21.133411  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:21.632656  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:22.132876  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:22.632816  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:23.133393  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:23.632776  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:24.133286  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:24.632415  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:25.132348  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:25.632478  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:26.132982  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:26.632517  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.132692  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.633291  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:28.132379  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:28.633377  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:29.132983  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:29.633370  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:30.132748  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:30.633383  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.133450  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.633210  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:32.132406  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:32.632598  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:33.132924  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:33.632884  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.132528  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.632989  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.133398  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.632376  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:36.132936  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:36.633152  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:37.133343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:37.633367  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:38.133368  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:38.632475  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.132993  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.633225  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:40.132552  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:40.633292  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:41.132443  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:41.632994  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:42.132631  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:42.633378  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:43.133189  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:43.632726  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:44.132804  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:44.632952  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:45.132474  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:45.633318  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.133116  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.632595  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:47.133211  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:47.633233  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:48.132348  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:48.632894  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:49.133272  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:49.633015  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.132977  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.632533  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:51.132939  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:51.632463  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:52.133082  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:52.633298  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:53.132520  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:53.633385  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.132432  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.632974  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.132958  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.633343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:56.132687  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:56.633236  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:57.133489  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:57.633105  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:58.132858  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:58.633386  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.132544  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.633427  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:00.133402  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:00.632719  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:01.132786  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:01.632909  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:02.133197  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:02.632620  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:03.133091  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:03.633389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.132587  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.633239  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:05.132773  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:05.632456  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:06.132989  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:06.632584  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:07.133153  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:07.633389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:08.132885  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:08.633192  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:09.132446  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:09.633385  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:10.132534  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:10.632399  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.132877  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.633091  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:12.132592  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:12.633185  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:13.132852  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:13.632863  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:14.132638  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:14.632522  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:15.133201  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:15.632442  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
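The repeated pgrep runs above are a fixed-interval wait for the kube-apiserver process: roughly every 500ms minikube checks whether the process exists, until it appears or the deadline passes (here it never appears). A minimal sketch of such a poll loop; the waitForAPIServer helper and the local pgrep invocation are illustrative, since minikube runs the check over SSH:

// wait_apiserver.go — a sketch of a fixed-interval wait for the kube-apiserver
// process, shelling out to pgrep locally for illustration.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the full command line.
		if err := exec.Command("pgrep", "-xf", "kube-apiserver.*").Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver process is running")
}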
	I1209 11:53:16.132620  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:16.132747  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:16.171708  662586 cri.go:89] found id: ""
	I1209 11:53:16.171748  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.171761  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:16.171768  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:16.171823  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:16.206350  662586 cri.go:89] found id: ""
	I1209 11:53:16.206381  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.206390  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:16.206398  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:16.206468  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:16.239292  662586 cri.go:89] found id: ""
	I1209 11:53:16.239325  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.239334  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:16.239341  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:16.239397  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:16.275809  662586 cri.go:89] found id: ""
	I1209 11:53:16.275841  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.275850  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:16.275856  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:16.275913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:16.310434  662586 cri.go:89] found id: ""
	I1209 11:53:16.310466  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.310474  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:16.310480  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:16.310540  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:16.347697  662586 cri.go:89] found id: ""
	I1209 11:53:16.347729  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.347738  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:16.347745  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:16.347801  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:16.380949  662586 cri.go:89] found id: ""
	I1209 11:53:16.380977  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.380985  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:16.380992  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:16.381074  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:16.415236  662586 cri.go:89] found id: ""
	I1209 11:53:16.415268  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.415290  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:16.415304  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:16.415321  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:16.459614  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:16.459645  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:16.509575  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:16.509617  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:16.522864  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:16.522898  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:16.644997  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:16.645059  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:16.645106  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:19.220978  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:19.233506  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:19.233597  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:19.268975  662586 cri.go:89] found id: ""
	I1209 11:53:19.269007  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.269019  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:19.269027  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:19.269103  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:19.304898  662586 cri.go:89] found id: ""
	I1209 11:53:19.304935  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.304949  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:19.304957  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:19.305034  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:19.344798  662586 cri.go:89] found id: ""
	I1209 11:53:19.344835  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.344846  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:19.344855  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:19.344925  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:19.395335  662586 cri.go:89] found id: ""
	I1209 11:53:19.395377  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.395387  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:19.395395  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:19.395464  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:19.430334  662586 cri.go:89] found id: ""
	I1209 11:53:19.430364  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.430377  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:19.430386  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:19.430465  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:19.468732  662586 cri.go:89] found id: ""
	I1209 11:53:19.468766  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.468775  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:19.468782  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:19.468836  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:19.503194  662586 cri.go:89] found id: ""
	I1209 11:53:19.503242  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.503255  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:19.503263  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:19.503328  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:19.537074  662586 cri.go:89] found id: ""
	I1209 11:53:19.537114  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.537125  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:19.537135  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:19.537151  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:19.590081  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:19.590130  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:19.604350  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:19.604388  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:19.683073  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:19.683106  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:19.683124  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:19.763564  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:19.763611  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:22.302792  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:22.315992  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:22.316079  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:22.350464  662586 cri.go:89] found id: ""
	I1209 11:53:22.350495  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.350505  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:22.350511  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:22.350569  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:22.382832  662586 cri.go:89] found id: ""
	I1209 11:53:22.382867  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.382880  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:22.382889  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:22.382958  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:22.417826  662586 cri.go:89] found id: ""
	I1209 11:53:22.417859  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.417871  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:22.417880  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:22.417963  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:22.451545  662586 cri.go:89] found id: ""
	I1209 11:53:22.451579  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.451588  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:22.451594  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:22.451659  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:22.488413  662586 cri.go:89] found id: ""
	I1209 11:53:22.488448  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.488458  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:22.488464  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:22.488531  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:22.523891  662586 cri.go:89] found id: ""
	I1209 11:53:22.523916  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.523925  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:22.523931  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:22.523990  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:22.555828  662586 cri.go:89] found id: ""
	I1209 11:53:22.555866  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.555879  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:22.555887  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:22.555960  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:22.592133  662586 cri.go:89] found id: ""
	I1209 11:53:22.592171  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.592181  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:22.592192  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:22.592209  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:22.641928  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:22.641966  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:22.655182  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:22.655215  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:22.724320  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:22.724343  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:22.724359  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:22.811692  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:22.811743  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:25.347903  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:25.360839  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:25.360907  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:25.392880  662586 cri.go:89] found id: ""
	I1209 11:53:25.392917  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.392930  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:25.392939  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:25.393008  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:25.427862  662586 cri.go:89] found id: ""
	I1209 11:53:25.427905  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.427914  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:25.427921  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:25.428009  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:25.463733  662586 cri.go:89] found id: ""
	I1209 11:53:25.463767  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.463778  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:25.463788  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:25.463884  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:25.501653  662586 cri.go:89] found id: ""
	I1209 11:53:25.501681  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.501690  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:25.501697  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:25.501751  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:25.535368  662586 cri.go:89] found id: ""
	I1209 11:53:25.535410  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.535422  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:25.535431  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:25.535511  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:25.569709  662586 cri.go:89] found id: ""
	I1209 11:53:25.569739  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.569748  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:25.569761  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:25.569827  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:25.604352  662586 cri.go:89] found id: ""
	I1209 11:53:25.604389  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.604404  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:25.604413  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:25.604477  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:25.635832  662586 cri.go:89] found id: ""
	I1209 11:53:25.635865  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.635878  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:25.635892  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:25.635908  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:25.650611  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:25.650647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:25.721092  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:25.721121  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:25.721139  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:25.795552  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:25.795598  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:25.858088  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:25.858161  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:28.410683  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:28.422993  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:28.423072  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:28.455054  662586 cri.go:89] found id: ""
	I1209 11:53:28.455083  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.455092  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:28.455098  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:28.455162  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:28.493000  662586 cri.go:89] found id: ""
	I1209 11:53:28.493037  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.493046  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:28.493052  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:28.493104  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:28.526294  662586 cri.go:89] found id: ""
	I1209 11:53:28.526333  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.526346  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:28.526354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:28.526417  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:28.560383  662586 cri.go:89] found id: ""
	I1209 11:53:28.560414  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.560423  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:28.560430  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:28.560485  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:28.595906  662586 cri.go:89] found id: ""
	I1209 11:53:28.595935  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.595946  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:28.595954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:28.596021  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:28.629548  662586 cri.go:89] found id: ""
	I1209 11:53:28.629584  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.629597  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:28.629607  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:28.629673  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:28.666362  662586 cri.go:89] found id: ""
	I1209 11:53:28.666398  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.666410  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:28.666418  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:28.666494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:28.697704  662586 cri.go:89] found id: ""
	I1209 11:53:28.697736  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.697746  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:28.697756  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:28.697769  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:28.745774  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:28.745816  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:28.759543  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:28.759582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:28.834772  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:28.834795  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:28.834812  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:28.913137  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:28.913178  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:31.460658  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:31.473503  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:31.473575  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:31.506710  662586 cri.go:89] found id: ""
	I1209 11:53:31.506748  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.506760  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:31.506770  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:31.506842  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:31.544127  662586 cri.go:89] found id: ""
	I1209 11:53:31.544188  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.544202  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:31.544211  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:31.544289  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:31.591081  662586 cri.go:89] found id: ""
	I1209 11:53:31.591116  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.591128  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:31.591135  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:31.591213  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:31.629311  662586 cri.go:89] found id: ""
	I1209 11:53:31.629340  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.629348  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:31.629355  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:31.629432  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:31.671035  662586 cri.go:89] found id: ""
	I1209 11:53:31.671069  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.671081  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:31.671090  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:31.671162  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:31.705753  662586 cri.go:89] found id: ""
	I1209 11:53:31.705792  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.705805  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:31.705815  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:31.705889  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:31.739118  662586 cri.go:89] found id: ""
	I1209 11:53:31.739146  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.739155  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:31.739162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:31.739225  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:31.771085  662586 cri.go:89] found id: ""
	I1209 11:53:31.771120  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.771129  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:31.771139  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:31.771152  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:31.820993  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:31.821049  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:31.835576  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:31.835612  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:31.903011  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:31.903039  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:31.903056  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:31.977784  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:31.977830  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
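	[editor's note] The log above repeats one polling cycle roughly every three seconds: probe for a kube-apiserver process, then ask the CRI runtime for a kube-apiserver (and other control-plane) container in any state, and retry when nothing is found. The Go sketch below is illustrative only, not minikube's actual code; the command strings are copied from the log, while waitForAPIServer and the local exec calls are invented (minikube runs these probes over SSH inside the guest).

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForAPIServer mirrors the ~3-second polling loop visible in the log:
	// probe for a running kube-apiserver process, then ask the CRI runtime for
	// a kube-apiserver container in any state, and retry until one shows up.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// Process probe, as in the log: pgrep -xnf kube-apiserver.*minikube.*
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			// Container probe, as in the log: crictl ps -a --quiet --name=kube-apiserver
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
			if err == nil && strings.TrimSpace(string(out)) != "" {
				return nil
			}
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}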
	I1209 11:53:34.514654  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:34.529156  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:34.529236  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:34.567552  662586 cri.go:89] found id: ""
	I1209 11:53:34.567580  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.567590  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:34.567598  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:34.567665  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:34.608863  662586 cri.go:89] found id: ""
	I1209 11:53:34.608891  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.608900  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:34.608907  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:34.608970  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:34.647204  662586 cri.go:89] found id: ""
	I1209 11:53:34.647242  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.647254  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:34.647263  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:34.647333  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:34.682511  662586 cri.go:89] found id: ""
	I1209 11:53:34.682565  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.682580  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:34.682596  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:34.682674  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:34.717557  662586 cri.go:89] found id: ""
	I1209 11:53:34.717585  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.717595  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:34.717602  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:34.717670  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:34.749814  662586 cri.go:89] found id: ""
	I1209 11:53:34.749851  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.749865  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:34.749876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:34.749949  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:34.782732  662586 cri.go:89] found id: ""
	I1209 11:53:34.782766  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.782776  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:34.782782  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:34.782846  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:34.817114  662586 cri.go:89] found id: ""
	I1209 11:53:34.817149  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.817162  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:34.817175  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:34.817192  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:34.885963  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:34.885986  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:34.886001  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:34.969858  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:34.969905  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:35.006981  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:35.007024  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:35.055360  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:35.055401  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:37.570641  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:37.595904  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:37.595986  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:37.642205  662586 cri.go:89] found id: ""
	I1209 11:53:37.642248  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.642261  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:37.642270  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:37.642347  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:37.676666  662586 cri.go:89] found id: ""
	I1209 11:53:37.676692  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.676701  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:37.676707  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:37.676760  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:37.714201  662586 cri.go:89] found id: ""
	I1209 11:53:37.714233  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.714243  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:37.714249  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:37.714311  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:37.748018  662586 cri.go:89] found id: ""
	I1209 11:53:37.748047  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.748058  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:37.748067  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:37.748127  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:37.783763  662586 cri.go:89] found id: ""
	I1209 11:53:37.783799  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.783807  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:37.783823  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:37.783898  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:37.822470  662586 cri.go:89] found id: ""
	I1209 11:53:37.822502  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.822514  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:37.822523  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:37.822585  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:37.858493  662586 cri.go:89] found id: ""
	I1209 11:53:37.858527  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.858537  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:37.858543  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:37.858599  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:37.899263  662586 cri.go:89] found id: ""
	I1209 11:53:37.899288  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.899295  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:37.899304  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:37.899321  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:37.972531  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:37.972559  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:37.972575  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:38.046271  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:38.046315  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:38.088829  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:38.088867  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:38.141935  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:38.141985  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:40.657131  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:40.669884  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:40.669954  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:40.704291  662586 cri.go:89] found id: ""
	I1209 11:53:40.704332  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.704345  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:40.704357  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:40.704435  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:40.738637  662586 cri.go:89] found id: ""
	I1209 11:53:40.738673  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.738684  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:40.738690  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:40.738747  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:40.770737  662586 cri.go:89] found id: ""
	I1209 11:53:40.770774  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.770787  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:40.770796  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:40.770865  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:40.805667  662586 cri.go:89] found id: ""
	I1209 11:53:40.805702  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.805729  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:40.805739  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:40.805812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:40.838444  662586 cri.go:89] found id: ""
	I1209 11:53:40.838482  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.838496  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:40.838505  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:40.838578  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:40.871644  662586 cri.go:89] found id: ""
	I1209 11:53:40.871679  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.871691  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:40.871700  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:40.871763  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:40.907242  662586 cri.go:89] found id: ""
	I1209 11:53:40.907275  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.907284  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:40.907291  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:40.907359  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:40.941542  662586 cri.go:89] found id: ""
	I1209 11:53:40.941570  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.941583  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:40.941595  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:40.941616  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:41.022344  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:41.022373  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:41.022387  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:41.097083  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:41.097129  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:41.135303  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:41.135349  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:41.191400  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:41.191447  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:43.705246  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:43.717939  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:43.718001  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:43.750027  662586 cri.go:89] found id: ""
	I1209 11:53:43.750066  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.750079  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:43.750087  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:43.750156  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:43.782028  662586 cri.go:89] found id: ""
	I1209 11:53:43.782067  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.782081  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:43.782090  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:43.782153  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:43.815509  662586 cri.go:89] found id: ""
	I1209 11:53:43.815549  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.815562  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:43.815570  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:43.815629  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:43.852803  662586 cri.go:89] found id: ""
	I1209 11:53:43.852834  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.852842  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:43.852850  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:43.852915  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:43.886761  662586 cri.go:89] found id: ""
	I1209 11:53:43.886789  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.886798  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:43.886805  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:43.886883  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:43.924427  662586 cri.go:89] found id: ""
	I1209 11:53:43.924458  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.924466  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:43.924478  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:43.924542  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:43.960351  662586 cri.go:89] found id: ""
	I1209 11:53:43.960381  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.960398  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:43.960407  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:43.960476  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:43.993933  662586 cri.go:89] found id: ""
	I1209 11:53:43.993960  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.993969  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:43.993979  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:43.994002  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:44.006915  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:44.006952  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:44.078928  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:44.078981  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:44.078999  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:44.158129  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:44.158188  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:44.199543  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:44.199577  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:46.748607  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:46.762381  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:46.762494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:46.795674  662586 cri.go:89] found id: ""
	I1209 11:53:46.795713  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.795727  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:46.795737  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:46.795812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:46.834027  662586 cri.go:89] found id: ""
	I1209 11:53:46.834055  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.834065  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:46.834072  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:46.834124  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:46.872116  662586 cri.go:89] found id: ""
	I1209 11:53:46.872156  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.872169  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:46.872179  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:46.872264  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:46.906571  662586 cri.go:89] found id: ""
	I1209 11:53:46.906599  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.906608  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:46.906615  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:46.906676  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:46.938266  662586 cri.go:89] found id: ""
	I1209 11:53:46.938303  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.938315  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:46.938323  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:46.938381  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:46.972281  662586 cri.go:89] found id: ""
	I1209 11:53:46.972318  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.972329  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:46.972337  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:46.972391  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:47.004797  662586 cri.go:89] found id: ""
	I1209 11:53:47.004828  662586 logs.go:282] 0 containers: []
	W1209 11:53:47.004837  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:47.004843  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:47.004908  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:47.035877  662586 cri.go:89] found id: ""
	I1209 11:53:47.035905  662586 logs.go:282] 0 containers: []
	W1209 11:53:47.035917  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:47.035931  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:47.035947  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:47.087654  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:47.087706  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:47.102311  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:47.102346  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:47.195370  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:47.195396  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:47.195414  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:47.279103  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:47.279158  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:49.817942  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:49.830291  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:49.830357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:49.862917  662586 cri.go:89] found id: ""
	I1209 11:53:49.862950  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.862959  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:49.862965  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:49.863033  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:49.894971  662586 cri.go:89] found id: ""
	I1209 11:53:49.895005  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.895018  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:49.895027  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:49.895097  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:49.931737  662586 cri.go:89] found id: ""
	I1209 11:53:49.931775  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.931786  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:49.931800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:49.931862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:49.971064  662586 cri.go:89] found id: ""
	I1209 11:53:49.971097  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.971109  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:49.971118  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:49.971210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:50.005354  662586 cri.go:89] found id: ""
	I1209 11:53:50.005393  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.005417  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:50.005427  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:50.005501  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:50.044209  662586 cri.go:89] found id: ""
	I1209 11:53:50.044240  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.044249  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:50.044257  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:50.044313  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:50.076360  662586 cri.go:89] found id: ""
	I1209 11:53:50.076408  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.076418  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:50.076426  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:50.076494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:50.112125  662586 cri.go:89] found id: ""
	I1209 11:53:50.112168  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.112196  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:50.112210  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:50.112228  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:50.164486  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:50.164530  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:50.178489  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:50.178525  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:50.250131  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:50.250165  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:50.250196  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:50.329733  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:50.329779  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:52.874887  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:52.888518  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:52.888607  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:52.924361  662586 cri.go:89] found id: ""
	I1209 11:53:52.924389  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.924398  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:52.924404  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:52.924467  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:52.957769  662586 cri.go:89] found id: ""
	I1209 11:53:52.957803  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.957816  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:52.957824  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:52.957891  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:52.990339  662586 cri.go:89] found id: ""
	I1209 11:53:52.990376  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.990388  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:52.990397  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:52.990461  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:53.022959  662586 cri.go:89] found id: ""
	I1209 11:53:53.023003  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.023017  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:53.023028  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:53.023111  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:53.060271  662586 cri.go:89] found id: ""
	I1209 11:53:53.060299  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.060315  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:53.060321  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:53.060390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:53.093470  662586 cri.go:89] found id: ""
	I1209 11:53:53.093500  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.093511  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:53.093519  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:53.093575  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:53.128902  662586 cri.go:89] found id: ""
	I1209 11:53:53.128941  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.128955  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:53.128963  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:53.129036  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:53.161927  662586 cri.go:89] found id: ""
	I1209 11:53:53.161955  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.161964  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:53.161974  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:53.161988  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:53.214098  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:53.214140  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:53.229191  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:53.229232  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:53.308648  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:53.308678  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:53.308695  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:53.386776  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:53.386816  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:55.929307  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:55.942217  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:55.942285  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:55.983522  662586 cri.go:89] found id: ""
	I1209 11:53:55.983563  662586 logs.go:282] 0 containers: []
	W1209 11:53:55.983572  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:55.983579  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:55.983645  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:56.017262  662586 cri.go:89] found id: ""
	I1209 11:53:56.017293  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.017308  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:56.017314  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:56.017367  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:56.052385  662586 cri.go:89] found id: ""
	I1209 11:53:56.052419  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.052429  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:56.052436  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:56.052489  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:56.085385  662586 cri.go:89] found id: ""
	I1209 11:53:56.085432  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.085444  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:56.085452  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:56.085519  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:56.122754  662586 cri.go:89] found id: ""
	I1209 11:53:56.122785  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.122794  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:56.122800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:56.122862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:56.159033  662586 cri.go:89] found id: ""
	I1209 11:53:56.159061  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.159070  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:56.159077  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:56.159128  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:56.198022  662586 cri.go:89] found id: ""
	I1209 11:53:56.198058  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.198070  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:56.198078  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:56.198148  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:56.231475  662586 cri.go:89] found id: ""
	I1209 11:53:56.231515  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.231528  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:56.231542  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:56.231559  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:56.304922  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:56.304969  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:56.339875  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:56.339916  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:56.392893  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:56.392929  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:56.406334  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:56.406376  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:56.474037  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
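	[editor's note] Each failed polling attempt ends with the same diagnostic pass: kubelet and CRI-O logs via journalctl, severity-filtered dmesg, "describe nodes" through the pinned v1.20.0 kubectl (which keeps failing with "connection refused" while the apiserver is down), and a container listing that falls back from crictl to docker. A rough, illustrative Go sketch of that pass follows; the command strings are copied verbatim from the log, but gatherLogs and the local execution are hypothetical (the real runner executes these over SSH and keeps going past failures, as the log shows).

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gatherLogs runs the diagnostic commands seen in the log above, in the
	// same order, printing whatever each returns and noting failures instead
	// of stopping on them.
	func gatherLogs() {
		steps := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, s := range steps {
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("failed %s: %v\n", s.name, err)
			}
			fmt.Printf("==> %s <==\n%s\n", s.name, out)
		}
	}

	func main() { gatherLogs() }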
	I1209 11:53:58.974725  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:58.987817  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:58.987890  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:59.020951  662586 cri.go:89] found id: ""
	I1209 11:53:59.020987  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.020996  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:59.021003  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:59.021055  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:59.055675  662586 cri.go:89] found id: ""
	I1209 11:53:59.055715  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.055727  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:59.055733  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:59.055800  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:59.090099  662586 cri.go:89] found id: ""
	I1209 11:53:59.090138  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.090150  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:59.090158  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:59.090252  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:59.124680  662586 cri.go:89] found id: ""
	I1209 11:53:59.124718  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.124730  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:59.124739  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:59.124802  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:59.157772  662586 cri.go:89] found id: ""
	I1209 11:53:59.157808  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.157819  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:59.157828  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:59.157892  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:59.191098  662586 cri.go:89] found id: ""
	I1209 11:53:59.191132  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.191141  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:59.191148  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:59.191212  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:59.224050  662586 cri.go:89] found id: ""
	I1209 11:53:59.224090  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.224102  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:59.224110  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:59.224198  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:59.262361  662586 cri.go:89] found id: ""
	I1209 11:53:59.262397  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.262418  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:59.262432  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:59.262456  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:59.276811  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:59.276844  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:59.349465  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:59.349492  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:59.349506  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:59.429146  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:59.429192  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:59.470246  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:59.470287  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:02.021651  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:02.036039  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:02.036109  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:02.070999  662586 cri.go:89] found id: ""
	I1209 11:54:02.071034  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.071045  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:02.071052  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:02.071119  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:02.107506  662586 cri.go:89] found id: ""
	I1209 11:54:02.107536  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.107546  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:02.107554  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:02.107624  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:02.146279  662586 cri.go:89] found id: ""
	I1209 11:54:02.146314  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.146326  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:02.146342  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:02.146408  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:02.178349  662586 cri.go:89] found id: ""
	I1209 11:54:02.178378  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.178387  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:02.178402  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:02.178460  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:02.211916  662586 cri.go:89] found id: ""
	I1209 11:54:02.211952  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.211963  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:02.211969  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:02.212038  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:02.246334  662586 cri.go:89] found id: ""
	I1209 11:54:02.246370  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.246380  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:02.246387  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:02.246452  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:02.280111  662586 cri.go:89] found id: ""
	I1209 11:54:02.280157  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.280168  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:02.280175  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:02.280246  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:02.314141  662586 cri.go:89] found id: ""
	I1209 11:54:02.314188  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.314203  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:02.314216  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:02.314236  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:02.327220  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:02.327253  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:02.396099  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:02.396127  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:02.396142  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:02.478096  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:02.478148  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:02.515144  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:02.515175  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:05.069286  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:05.082453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:05.082540  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:05.116263  662586 cri.go:89] found id: ""
	I1209 11:54:05.116299  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.116313  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:05.116321  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:05.116388  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:05.150736  662586 cri.go:89] found id: ""
	I1209 11:54:05.150775  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.150788  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:05.150796  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:05.150864  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:05.183757  662586 cri.go:89] found id: ""
	I1209 11:54:05.183792  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.183804  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:05.183812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:05.183873  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:05.215986  662586 cri.go:89] found id: ""
	I1209 11:54:05.216017  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.216029  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:05.216038  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:05.216096  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:05.247648  662586 cri.go:89] found id: ""
	I1209 11:54:05.247686  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.247698  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:05.247707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:05.247776  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:05.279455  662586 cri.go:89] found id: ""
	I1209 11:54:05.279484  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.279495  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:05.279504  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:05.279567  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:05.320334  662586 cri.go:89] found id: ""
	I1209 11:54:05.320374  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.320387  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:05.320398  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:05.320490  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:05.353475  662586 cri.go:89] found id: ""
	I1209 11:54:05.353503  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.353512  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:05.353522  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:05.353536  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:05.368320  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:05.368357  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:05.442152  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:05.442193  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:05.442212  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:05.523726  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:05.523768  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:05.562405  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:05.562438  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:08.115564  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:08.129426  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:08.129523  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:08.162412  662586 cri.go:89] found id: ""
	I1209 11:54:08.162454  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.162467  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:08.162477  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:08.162543  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:08.196821  662586 cri.go:89] found id: ""
	I1209 11:54:08.196860  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.196873  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:08.196882  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:08.196949  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:08.233068  662586 cri.go:89] found id: ""
	I1209 11:54:08.233106  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.233117  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:08.233124  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:08.233184  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:08.268683  662586 cri.go:89] found id: ""
	I1209 11:54:08.268715  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.268724  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:08.268731  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:08.268790  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:08.303237  662586 cri.go:89] found id: ""
	I1209 11:54:08.303276  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.303288  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:08.303309  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:08.303393  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:08.339513  662586 cri.go:89] found id: ""
	I1209 11:54:08.339543  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.339551  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:08.339557  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:08.339612  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:08.376237  662586 cri.go:89] found id: ""
	I1209 11:54:08.376268  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.376289  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:08.376298  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:08.376363  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:08.410530  662586 cri.go:89] found id: ""
	I1209 11:54:08.410560  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.410568  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:08.410577  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:08.410589  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:08.460064  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:08.460101  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:08.474547  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:08.474582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:08.544231  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:08.544260  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:08.544277  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:08.624727  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:08.624775  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:11.167943  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:11.183210  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:11.183294  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:11.221326  662586 cri.go:89] found id: ""
	I1209 11:54:11.221356  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.221365  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:11.221371  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:11.221434  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:11.254688  662586 cri.go:89] found id: ""
	I1209 11:54:11.254721  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.254730  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:11.254736  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:11.254801  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:11.287611  662586 cri.go:89] found id: ""
	I1209 11:54:11.287649  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.287660  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:11.287666  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:11.287738  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:11.320533  662586 cri.go:89] found id: ""
	I1209 11:54:11.320565  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.320574  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:11.320580  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:11.320638  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:11.362890  662586 cri.go:89] found id: ""
	I1209 11:54:11.362923  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.362933  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:11.362939  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:11.363007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:11.418729  662586 cri.go:89] found id: ""
	I1209 11:54:11.418762  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.418772  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:11.418779  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:11.418837  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:11.455336  662586 cri.go:89] found id: ""
	I1209 11:54:11.455374  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.455388  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:11.455397  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:11.455479  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:11.491307  662586 cri.go:89] found id: ""
	I1209 11:54:11.491334  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.491344  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:11.491355  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:11.491369  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:11.543161  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:11.543204  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:11.556633  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:11.556670  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:11.626971  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:11.627001  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:11.627025  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:11.702061  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:11.702107  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:14.245241  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:14.258461  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:14.258544  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:14.292108  662586 cri.go:89] found id: ""
	I1209 11:54:14.292147  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.292156  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:14.292163  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:14.292219  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:14.327347  662586 cri.go:89] found id: ""
	I1209 11:54:14.327381  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.327394  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:14.327403  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:14.327484  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:14.361188  662586 cri.go:89] found id: ""
	I1209 11:54:14.361220  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.361229  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:14.361236  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:14.361290  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:14.394898  662586 cri.go:89] found id: ""
	I1209 11:54:14.394936  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.394948  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:14.394960  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:14.395027  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:14.429326  662586 cri.go:89] found id: ""
	I1209 11:54:14.429402  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.429420  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:14.429431  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:14.429510  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:14.462903  662586 cri.go:89] found id: ""
	I1209 11:54:14.462938  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.462947  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:14.462954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:14.463009  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:14.496362  662586 cri.go:89] found id: ""
	I1209 11:54:14.496396  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.496409  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:14.496418  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:14.496562  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:14.530052  662586 cri.go:89] found id: ""
	I1209 11:54:14.530085  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.530098  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:14.530111  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:14.530131  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:14.543096  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:14.543133  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:14.611030  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:14.611055  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:14.611067  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:14.684984  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:14.685041  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:14.722842  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:14.722881  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:17.275868  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:17.288812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:17.288898  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:17.323732  662586 cri.go:89] found id: ""
	I1209 11:54:17.323766  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.323777  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:17.323786  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:17.323852  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:17.367753  662586 cri.go:89] found id: ""
	I1209 11:54:17.367788  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.367801  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:17.367810  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:17.367878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:17.411444  662586 cri.go:89] found id: ""
	I1209 11:54:17.411476  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.411488  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:17.411496  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:17.411563  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:17.450790  662586 cri.go:89] found id: ""
	I1209 11:54:17.450821  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.450832  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:17.450840  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:17.450913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:17.488824  662586 cri.go:89] found id: ""
	I1209 11:54:17.488859  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.488869  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:17.488876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:17.488948  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:17.522051  662586 cri.go:89] found id: ""
	I1209 11:54:17.522085  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.522094  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:17.522102  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:17.522165  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:17.556653  662586 cri.go:89] found id: ""
	I1209 11:54:17.556687  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.556700  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:17.556707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:17.556783  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:17.591303  662586 cri.go:89] found id: ""
	I1209 11:54:17.591337  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.591355  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:17.591367  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:17.591384  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:17.656675  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:17.656699  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:17.656712  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:17.739894  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:17.739939  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:17.789486  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:17.789517  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:17.843606  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:17.843648  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:20.361896  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:20.378015  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:20.378105  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:20.412252  662586 cri.go:89] found id: ""
	I1209 11:54:20.412299  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.412311  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:20.412327  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:20.412396  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:20.443638  662586 cri.go:89] found id: ""
	I1209 11:54:20.443671  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.443682  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:20.443690  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:20.443758  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:20.478578  662586 cri.go:89] found id: ""
	I1209 11:54:20.478613  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.478625  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:20.478634  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:20.478704  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:20.512232  662586 cri.go:89] found id: ""
	I1209 11:54:20.512266  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.512279  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:20.512295  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:20.512357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:20.544358  662586 cri.go:89] found id: ""
	I1209 11:54:20.544398  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.544413  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:20.544429  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:20.544494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:20.579476  662586 cri.go:89] found id: ""
	I1209 11:54:20.579513  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.579525  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:20.579533  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:20.579600  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:20.613851  662586 cri.go:89] found id: ""
	I1209 11:54:20.613884  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.613897  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:20.613903  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:20.613973  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:20.647311  662586 cri.go:89] found id: ""
	I1209 11:54:20.647342  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.647351  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:20.647362  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:20.647375  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:20.695798  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:20.695839  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:20.709443  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:20.709478  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:20.779211  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:20.779237  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:20.779253  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:20.857966  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:20.858012  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:23.398095  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:23.412622  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:23.412686  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:23.446582  662586 cri.go:89] found id: ""
	I1209 11:54:23.446616  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.446628  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:23.446637  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:23.446705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:23.487896  662586 cri.go:89] found id: ""
	I1209 11:54:23.487926  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.487935  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:23.487941  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:23.488007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:23.521520  662586 cri.go:89] found id: ""
	I1209 11:54:23.521559  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.521571  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:23.521579  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:23.521651  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:23.561296  662586 cri.go:89] found id: ""
	I1209 11:54:23.561329  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.561342  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:23.561350  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:23.561417  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:23.604936  662586 cri.go:89] found id: ""
	I1209 11:54:23.604965  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.604976  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:23.604985  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:23.605055  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:23.665193  662586 cri.go:89] found id: ""
	I1209 11:54:23.665225  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.665237  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:23.665247  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:23.665315  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:23.700202  662586 cri.go:89] found id: ""
	I1209 11:54:23.700239  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.700251  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:23.700259  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:23.700336  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:23.734877  662586 cri.go:89] found id: ""
	I1209 11:54:23.734907  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.734917  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:23.734927  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:23.734941  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:23.817328  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:23.817371  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:23.855052  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:23.855085  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:23.909107  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:23.909154  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:23.924198  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:23.924227  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:23.991976  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:26.492366  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:26.506223  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:26.506299  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:26.544932  662586 cri.go:89] found id: ""
	I1209 11:54:26.544974  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.544987  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:26.544997  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:26.545080  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:26.579581  662586 cri.go:89] found id: ""
	I1209 11:54:26.579621  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.579634  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:26.579643  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:26.579716  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:26.612510  662586 cri.go:89] found id: ""
	I1209 11:54:26.612545  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.612567  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:26.612577  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:26.612646  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:26.646273  662586 cri.go:89] found id: ""
	I1209 11:54:26.646306  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.646316  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:26.646322  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:26.646376  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:26.682027  662586 cri.go:89] found id: ""
	I1209 11:54:26.682063  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.682072  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:26.682078  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:26.682132  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:26.715822  662586 cri.go:89] found id: ""
	I1209 11:54:26.715876  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.715889  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:26.715898  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:26.715964  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:26.755976  662586 cri.go:89] found id: ""
	I1209 11:54:26.756016  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.756031  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:26.756040  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:26.756122  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:26.787258  662586 cri.go:89] found id: ""
	I1209 11:54:26.787297  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.787308  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:26.787319  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:26.787333  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:26.800534  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:26.800573  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:26.865767  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:26.865798  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:26.865824  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:26.950409  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:26.950460  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:26.994281  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:26.994320  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:29.544568  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:29.565182  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:29.565263  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:29.625116  662586 cri.go:89] found id: ""
	I1209 11:54:29.625155  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.625168  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:29.625181  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:29.625257  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:29.673689  662586 cri.go:89] found id: ""
	I1209 11:54:29.673727  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.673739  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:29.673747  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:29.673811  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:29.705925  662586 cri.go:89] found id: ""
	I1209 11:54:29.705959  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.705971  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:29.705979  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:29.706033  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:29.738731  662586 cri.go:89] found id: ""
	I1209 11:54:29.738759  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.738767  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:29.738774  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:29.738832  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:29.770778  662586 cri.go:89] found id: ""
	I1209 11:54:29.770814  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.770826  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:29.770833  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:29.770899  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:29.801925  662586 cri.go:89] found id: ""
	I1209 11:54:29.801961  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.801973  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:29.801981  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:29.802050  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:29.833681  662586 cri.go:89] found id: ""
	I1209 11:54:29.833712  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.833722  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:29.833727  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:29.833791  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:29.873666  662586 cri.go:89] found id: ""
	I1209 11:54:29.873700  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.873712  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:29.873722  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:29.873735  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:29.914855  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:29.914895  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:29.967730  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:29.967772  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:29.982037  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:29.982070  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:30.047168  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:30.047195  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:30.047212  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:32.623371  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:32.636346  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:32.636411  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:32.677709  662586 cri.go:89] found id: ""
	I1209 11:54:32.677736  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.677744  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:32.677753  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:32.677805  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:32.710906  662586 cri.go:89] found id: ""
	I1209 11:54:32.710933  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.710942  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:32.710948  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:32.711000  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:32.744623  662586 cri.go:89] found id: ""
	I1209 11:54:32.744654  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.744667  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:32.744676  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:32.744736  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:32.779334  662586 cri.go:89] found id: ""
	I1209 11:54:32.779364  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.779375  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:32.779382  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:32.779443  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:32.814998  662586 cri.go:89] found id: ""
	I1209 11:54:32.815032  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.815046  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:32.815055  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:32.815128  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:32.850054  662586 cri.go:89] found id: ""
	I1209 11:54:32.850099  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.850116  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:32.850127  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:32.850213  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:32.885769  662586 cri.go:89] found id: ""
	I1209 11:54:32.885805  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.885818  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:32.885827  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:32.885899  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:32.927973  662586 cri.go:89] found id: ""
	I1209 11:54:32.928001  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.928010  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:32.928019  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:32.928032  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:32.981915  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:32.981966  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:32.995817  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:32.995851  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:33.062409  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:33.062445  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:33.062462  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:33.146967  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:33.147011  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:35.688225  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:35.701226  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:35.701325  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:35.738628  662586 cri.go:89] found id: ""
	I1209 11:54:35.738655  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.738663  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:35.738670  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:35.738737  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:35.771125  662586 cri.go:89] found id: ""
	I1209 11:54:35.771163  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.771177  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:35.771187  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:35.771260  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:35.806244  662586 cri.go:89] found id: ""
	I1209 11:54:35.806277  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.806290  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:35.806301  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:35.806376  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:35.839871  662586 cri.go:89] found id: ""
	I1209 11:54:35.839912  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.839925  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:35.839932  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:35.840010  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:35.874994  662586 cri.go:89] found id: ""
	I1209 11:54:35.875034  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.875046  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:35.875054  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:35.875129  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:35.910802  662586 cri.go:89] found id: ""
	I1209 11:54:35.910834  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.910846  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:35.910855  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:35.910927  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:35.944633  662586 cri.go:89] found id: ""
	I1209 11:54:35.944663  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.944672  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:35.944678  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:35.944749  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:35.982732  662586 cri.go:89] found id: ""
	I1209 11:54:35.982781  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.982796  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:35.982811  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:35.982830  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:35.996271  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:35.996302  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:36.063463  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:36.063533  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:36.063554  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:36.141789  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:36.141833  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:36.187015  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:36.187047  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:38.739585  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:38.754322  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:38.754394  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:38.792497  662586 cri.go:89] found id: ""
	I1209 11:54:38.792525  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.792535  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:38.792543  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:38.792608  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:38.829730  662586 cri.go:89] found id: ""
	I1209 11:54:38.829759  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.829768  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:38.829774  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:38.829834  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:38.869942  662586 cri.go:89] found id: ""
	I1209 11:54:38.869981  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.869994  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:38.870015  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:38.870085  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:38.906001  662586 cri.go:89] found id: ""
	I1209 11:54:38.906041  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.906054  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:38.906063  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:38.906133  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:38.944389  662586 cri.go:89] found id: ""
	I1209 11:54:38.944427  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.944445  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:38.944453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:38.944534  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:38.979633  662586 cri.go:89] found id: ""
	I1209 11:54:38.979665  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.979674  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:38.979681  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:38.979735  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:39.016366  662586 cri.go:89] found id: ""
	I1209 11:54:39.016402  662586 logs.go:282] 0 containers: []
	W1209 11:54:39.016416  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:39.016424  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:39.016489  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:39.049084  662586 cri.go:89] found id: ""
	I1209 11:54:39.049116  662586 logs.go:282] 0 containers: []
	W1209 11:54:39.049125  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:39.049134  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:39.049148  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:39.113953  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:39.113985  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:39.114004  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:39.191715  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:39.191767  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:39.232127  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:39.232167  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:39.281406  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:39.281448  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:41.795395  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:41.810293  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:41.810364  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:41.849819  662586 cri.go:89] found id: ""
	I1209 11:54:41.849858  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.849872  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:41.849882  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:41.849952  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:41.883871  662586 cri.go:89] found id: ""
	I1209 11:54:41.883908  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.883934  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:41.883942  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:41.884017  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:41.918194  662586 cri.go:89] found id: ""
	I1209 11:54:41.918230  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.918239  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:41.918245  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:41.918312  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:41.950878  662586 cri.go:89] found id: ""
	I1209 11:54:41.950912  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.950924  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:41.950933  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:41.950995  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:41.982922  662586 cri.go:89] found id: ""
	I1209 11:54:41.982964  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.982976  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:41.982985  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:41.983064  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:42.014066  662586 cri.go:89] found id: ""
	I1209 11:54:42.014107  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.014120  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:42.014129  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:42.014229  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:42.048017  662586 cri.go:89] found id: ""
	I1209 11:54:42.048056  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.048070  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:42.048079  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:42.048146  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:42.080585  662586 cri.go:89] found id: ""
	I1209 11:54:42.080614  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.080624  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:42.080634  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:42.080646  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:42.135012  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:42.135054  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:42.148424  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:42.148462  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:42.219179  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:42.219206  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:42.219230  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:42.305855  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:42.305902  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:44.843158  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:44.856317  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:44.856380  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:44.890940  662586 cri.go:89] found id: ""
	I1209 11:54:44.890984  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.891003  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:44.891012  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:44.891081  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:44.923657  662586 cri.go:89] found id: ""
	I1209 11:54:44.923684  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.923692  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:44.923698  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:44.923769  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:44.957512  662586 cri.go:89] found id: ""
	I1209 11:54:44.957545  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.957558  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:44.957566  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:44.957636  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:44.998084  662586 cri.go:89] found id: ""
	I1209 11:54:44.998112  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.998121  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:44.998128  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:44.998210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:45.030335  662586 cri.go:89] found id: ""
	I1209 11:54:45.030360  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.030369  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:45.030375  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:45.030447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:45.063098  662586 cri.go:89] found id: ""
	I1209 11:54:45.063127  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.063135  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:45.063141  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:45.063210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:45.098430  662586 cri.go:89] found id: ""
	I1209 11:54:45.098458  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.098466  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:45.098472  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:45.098526  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:45.132064  662586 cri.go:89] found id: ""
	I1209 11:54:45.132094  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.132102  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:45.132113  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:45.132131  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:45.185512  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:45.185556  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:45.199543  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:45.199572  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:45.268777  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:45.268803  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:45.268817  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:45.352250  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:45.352299  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:47.892201  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:47.906961  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:47.907053  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:47.941349  662586 cri.go:89] found id: ""
	I1209 11:54:47.941394  662586 logs.go:282] 0 containers: []
	W1209 11:54:47.941408  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:47.941418  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:47.941479  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:47.981086  662586 cri.go:89] found id: ""
	I1209 11:54:47.981120  662586 logs.go:282] 0 containers: []
	W1209 11:54:47.981133  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:47.981141  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:47.981210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:48.014105  662586 cri.go:89] found id: ""
	I1209 11:54:48.014142  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.014151  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:48.014162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:48.014249  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:48.049506  662586 cri.go:89] found id: ""
	I1209 11:54:48.049535  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.049544  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:48.049552  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:48.049619  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:48.084284  662586 cri.go:89] found id: ""
	I1209 11:54:48.084314  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.084324  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:48.084336  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:48.084406  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:48.117318  662586 cri.go:89] found id: ""
	I1209 11:54:48.117349  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.117362  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:48.117371  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:48.117441  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:48.150121  662586 cri.go:89] found id: ""
	I1209 11:54:48.150151  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.150187  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:48.150198  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:48.150266  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:48.180919  662586 cri.go:89] found id: ""
	I1209 11:54:48.180947  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.180955  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:48.180966  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:48.180978  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:48.249572  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:48.249602  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:48.249617  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:48.324508  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:48.324552  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:48.363856  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:48.363901  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:48.415662  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:48.415721  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:50.929811  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:50.943650  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:50.943714  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:50.976444  662586 cri.go:89] found id: ""
	I1209 11:54:50.976480  662586 logs.go:282] 0 containers: []
	W1209 11:54:50.976493  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:50.976502  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:50.976574  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:51.016567  662586 cri.go:89] found id: ""
	I1209 11:54:51.016600  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.016613  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:51.016621  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:51.016699  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:51.048933  662586 cri.go:89] found id: ""
	I1209 11:54:51.048967  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.048977  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:51.048986  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:51.049073  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:51.083292  662586 cri.go:89] found id: ""
	I1209 11:54:51.083333  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.083345  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:51.083354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:51.083423  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:51.118505  662586 cri.go:89] found id: ""
	I1209 11:54:51.118547  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.118560  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:51.118571  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:51.118644  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:51.152818  662586 cri.go:89] found id: ""
	I1209 11:54:51.152847  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.152856  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:51.152870  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:51.152922  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:51.186953  662586 cri.go:89] found id: ""
	I1209 11:54:51.186981  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.186991  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:51.186997  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:51.187063  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:51.219305  662586 cri.go:89] found id: ""
	I1209 11:54:51.219337  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.219348  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:51.219361  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:51.219380  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:51.256295  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:51.256338  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:51.313751  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:51.313806  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:51.326940  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:51.326977  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:51.397395  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:51.397428  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:51.397445  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:53.975557  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:53.989509  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:53.989581  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:54.024363  662586 cri.go:89] found id: ""
	I1209 11:54:54.024403  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.024416  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:54.024423  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:54.024484  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:54.062618  662586 cri.go:89] found id: ""
	I1209 11:54:54.062649  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.062659  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:54.062667  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:54.062739  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:54.100194  662586 cri.go:89] found id: ""
	I1209 11:54:54.100231  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.100243  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:54.100252  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:54.100324  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:54.135302  662586 cri.go:89] found id: ""
	I1209 11:54:54.135341  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.135354  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:54.135363  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:54.135447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:54.170898  662586 cri.go:89] found id: ""
	I1209 11:54:54.170940  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.170953  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:54.170963  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:54.171035  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:54.205098  662586 cri.go:89] found id: ""
	I1209 11:54:54.205138  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.205151  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:54.205159  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:54.205223  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:54.239153  662586 cri.go:89] found id: ""
	I1209 11:54:54.239210  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.239226  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:54.239234  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:54.239307  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:54.278213  662586 cri.go:89] found id: ""
	I1209 11:54:54.278248  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.278260  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:54.278275  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:54.278296  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:54.348095  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:54.348128  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:54.348156  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:54.427181  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:54.427224  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:54.467623  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:54.467656  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:54.519690  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:54.519734  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:57.033524  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:57.046420  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:57.046518  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:57.079588  662586 cri.go:89] found id: ""
	I1209 11:54:57.079616  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.079626  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:57.079633  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:57.079687  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:57.114944  662586 cri.go:89] found id: ""
	I1209 11:54:57.114973  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.114982  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:57.114988  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:57.115043  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:57.147667  662586 cri.go:89] found id: ""
	I1209 11:54:57.147708  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.147721  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:57.147730  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:57.147794  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:57.182339  662586 cri.go:89] found id: ""
	I1209 11:54:57.182370  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.182386  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:57.182395  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:57.182470  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:57.223129  662586 cri.go:89] found id: ""
	I1209 11:54:57.223170  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.223186  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:57.223197  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:57.223270  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:57.262351  662586 cri.go:89] found id: ""
	I1209 11:54:57.262386  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.262398  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:57.262409  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:57.262471  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:57.298743  662586 cri.go:89] found id: ""
	I1209 11:54:57.298772  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.298782  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:57.298789  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:57.298856  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:57.339030  662586 cri.go:89] found id: ""
	I1209 11:54:57.339064  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.339073  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:57.339085  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:57.339122  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:57.352603  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:57.352637  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:57.426627  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:57.426653  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:57.426669  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:57.515357  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:57.515401  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:57.554882  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:57.554925  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:00.112082  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:00.124977  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:00.125056  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:00.159003  662586 cri.go:89] found id: ""
	I1209 11:55:00.159032  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.159041  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:00.159048  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:00.159101  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:00.192479  662586 cri.go:89] found id: ""
	I1209 11:55:00.192515  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.192527  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:00.192533  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:00.192587  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:00.226146  662586 cri.go:89] found id: ""
	I1209 11:55:00.226194  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.226208  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:00.226216  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:00.226273  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:00.260389  662586 cri.go:89] found id: ""
	I1209 11:55:00.260420  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.260430  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:00.260442  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:00.260500  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:00.296091  662586 cri.go:89] found id: ""
	I1209 11:55:00.296121  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.296131  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:00.296138  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:00.296195  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:00.332101  662586 cri.go:89] found id: ""
	I1209 11:55:00.332137  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.332150  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:00.332158  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:00.332244  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:00.377329  662586 cri.go:89] found id: ""
	I1209 11:55:00.377358  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.377368  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:00.377374  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:00.377438  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:00.415660  662586 cri.go:89] found id: ""
	I1209 11:55:00.415688  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.415751  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:00.415767  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:00.415781  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:00.467734  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:00.467776  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:00.481244  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:00.481280  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:00.545721  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:00.545755  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:00.545777  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:00.624482  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:00.624533  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:03.168340  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:03.183354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:03.183439  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:03.223131  662586 cri.go:89] found id: ""
	I1209 11:55:03.223171  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.223185  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:03.223193  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:03.223263  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:03.256561  662586 cri.go:89] found id: ""
	I1209 11:55:03.256595  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.256603  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:03.256609  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:03.256667  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:03.289670  662586 cri.go:89] found id: ""
	I1209 11:55:03.289707  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.289722  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:03.289738  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:03.289813  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:03.323687  662586 cri.go:89] found id: ""
	I1209 11:55:03.323714  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.323724  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:03.323730  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:03.323786  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:03.358163  662586 cri.go:89] found id: ""
	I1209 11:55:03.358221  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.358233  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:03.358241  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:03.358311  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:03.399688  662586 cri.go:89] found id: ""
	I1209 11:55:03.399721  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.399734  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:03.399744  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:03.399812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:03.433909  662586 cri.go:89] found id: ""
	I1209 11:55:03.433939  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.433948  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:03.433954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:03.434011  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:03.470208  662586 cri.go:89] found id: ""
	I1209 11:55:03.470239  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.470248  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:03.470270  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:03.470289  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:03.545801  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:03.545848  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:03.584357  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:03.584389  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:03.641241  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:03.641283  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:03.657034  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:03.657080  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:03.731285  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:06.232380  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:06.246339  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:06.246411  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:06.281323  662586 cri.go:89] found id: ""
	I1209 11:55:06.281362  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.281377  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:06.281385  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:06.281444  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:06.318225  662586 cri.go:89] found id: ""
	I1209 11:55:06.318261  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.318277  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:06.318293  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:06.318364  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:06.353649  662586 cri.go:89] found id: ""
	I1209 11:55:06.353685  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.353699  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:06.353708  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:06.353782  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:06.395204  662586 cri.go:89] found id: ""
	I1209 11:55:06.395242  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.395257  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:06.395266  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:06.395335  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:06.436421  662586 cri.go:89] found id: ""
	I1209 11:55:06.436452  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.436462  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:06.436469  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:06.436524  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:06.472218  662586 cri.go:89] found id: ""
	I1209 11:55:06.472246  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.472255  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:06.472268  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:06.472335  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:06.506585  662586 cri.go:89] found id: ""
	I1209 11:55:06.506629  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.506640  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:06.506647  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:06.506702  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:06.541442  662586 cri.go:89] found id: ""
	I1209 11:55:06.541472  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.541481  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:06.541493  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:06.541512  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:06.592642  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:06.592682  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:06.606764  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:06.606805  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:06.677693  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:06.677720  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:06.677740  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:06.766074  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:06.766124  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:09.305144  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:09.319352  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:09.319444  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:09.357918  662586 cri.go:89] found id: ""
	I1209 11:55:09.358027  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.358066  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:09.358077  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:09.358139  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:09.413181  662586 cri.go:89] found id: ""
	I1209 11:55:09.413213  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.413226  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:09.413234  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:09.413310  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:09.448417  662586 cri.go:89] found id: ""
	I1209 11:55:09.448460  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.448471  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:09.448480  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:09.448566  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:09.489732  662586 cri.go:89] found id: ""
	I1209 11:55:09.489765  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.489775  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:09.489781  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:09.489845  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:09.524919  662586 cri.go:89] found id: ""
	I1209 11:55:09.524948  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.524959  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:09.524968  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:09.525051  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:09.563268  662586 cri.go:89] found id: ""
	I1209 11:55:09.563301  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.563311  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:09.563318  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:09.563373  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:09.598747  662586 cri.go:89] found id: ""
	I1209 11:55:09.598780  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.598790  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:09.598798  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:09.598866  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:09.634447  662586 cri.go:89] found id: ""
	I1209 11:55:09.634479  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.634492  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:09.634505  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:09.634520  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:09.647380  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:09.647419  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:09.721335  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:09.721363  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:09.721380  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:09.801039  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:09.801088  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:09.840929  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:09.840971  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:12.393810  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:12.407553  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:12.407654  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:12.444391  662586 cri.go:89] found id: ""
	I1209 11:55:12.444437  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.444450  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:12.444459  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:12.444533  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:12.482714  662586 cri.go:89] found id: ""
	I1209 11:55:12.482752  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.482764  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:12.482771  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:12.482853  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:12.518139  662586 cri.go:89] found id: ""
	I1209 11:55:12.518187  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.518202  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:12.518211  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:12.518281  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:12.556903  662586 cri.go:89] found id: ""
	I1209 11:55:12.556938  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.556950  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:12.556958  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:12.557028  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:12.591915  662586 cri.go:89] found id: ""
	I1209 11:55:12.591953  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.591963  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:12.591971  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:12.592038  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:12.629767  662586 cri.go:89] found id: ""
	I1209 11:55:12.629797  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.629806  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:12.629812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:12.629878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:12.667677  662586 cri.go:89] found id: ""
	I1209 11:55:12.667710  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.667720  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:12.667727  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:12.667781  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:12.705720  662586 cri.go:89] found id: ""
	I1209 11:55:12.705747  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.705756  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:12.705766  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:12.705780  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:12.758399  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:12.758441  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:12.772297  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:12.772336  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:12.839545  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:12.839569  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:12.839582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:12.918424  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:12.918467  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:15.458122  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:15.473193  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:15.473284  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:15.508756  662586 cri.go:89] found id: ""
	I1209 11:55:15.508790  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.508799  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:15.508806  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:15.508862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:15.544735  662586 cri.go:89] found id: ""
	I1209 11:55:15.544770  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.544782  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:15.544791  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:15.544866  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:15.577169  662586 cri.go:89] found id: ""
	I1209 11:55:15.577200  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.577210  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:15.577216  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:15.577277  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:15.610662  662586 cri.go:89] found id: ""
	I1209 11:55:15.610690  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.610700  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:15.610707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:15.610763  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:15.645339  662586 cri.go:89] found id: ""
	I1209 11:55:15.645375  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.645386  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:15.645394  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:15.645469  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:15.682044  662586 cri.go:89] found id: ""
	I1209 11:55:15.682079  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.682096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:15.682106  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:15.682201  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:15.717193  662586 cri.go:89] found id: ""
	I1209 11:55:15.717228  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.717245  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:15.717256  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:15.717332  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:15.751756  662586 cri.go:89] found id: ""
	I1209 11:55:15.751792  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.751803  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:15.751813  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:15.751827  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:15.811010  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:15.811063  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:15.842556  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:15.842597  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:15.920169  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:15.920195  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:15.920209  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:16.003180  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:16.003226  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:18.542563  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:18.555968  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:18.556059  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:18.588746  662586 cri.go:89] found id: ""
	I1209 11:55:18.588780  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.588790  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:18.588797  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:18.588854  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:18.623664  662586 cri.go:89] found id: ""
	I1209 11:55:18.623707  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.623720  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:18.623728  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:18.623798  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:18.659012  662586 cri.go:89] found id: ""
	I1209 11:55:18.659051  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.659065  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:18.659074  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:18.659148  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:18.693555  662586 cri.go:89] found id: ""
	I1209 11:55:18.693588  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.693600  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:18.693607  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:18.693661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:18.726609  662586 cri.go:89] found id: ""
	I1209 11:55:18.726641  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.726652  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:18.726659  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:18.726712  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:18.760654  662586 cri.go:89] found id: ""
	I1209 11:55:18.760682  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.760694  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:18.760704  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:18.760761  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:18.794656  662586 cri.go:89] found id: ""
	I1209 11:55:18.794688  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.794699  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:18.794706  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:18.794769  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:18.829988  662586 cri.go:89] found id: ""
	I1209 11:55:18.830030  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.830045  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:18.830059  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:18.830073  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:18.872523  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:18.872558  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:18.929408  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:18.929449  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:18.943095  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:18.943133  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:19.009125  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:19.009150  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:19.009164  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:21.587418  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:21.606271  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:21.606358  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:21.653536  662586 cri.go:89] found id: ""
	I1209 11:55:21.653574  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.653586  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:21.653595  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:21.653671  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:21.687023  662586 cri.go:89] found id: ""
	I1209 11:55:21.687049  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.687060  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:21.687068  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:21.687131  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:21.720112  662586 cri.go:89] found id: ""
	I1209 11:55:21.720150  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.720163  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:21.720171  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:21.720243  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:21.754697  662586 cri.go:89] found id: ""
	I1209 11:55:21.754729  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.754740  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:21.754749  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:21.754814  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:21.793926  662586 cri.go:89] found id: ""
	I1209 11:55:21.793957  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.793967  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:21.793973  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:21.794040  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:21.827572  662586 cri.go:89] found id: ""
	I1209 11:55:21.827609  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.827622  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:21.827633  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:21.827700  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:21.861442  662586 cri.go:89] found id: ""
	I1209 11:55:21.861472  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.861490  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:21.861499  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:21.861565  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:21.894858  662586 cri.go:89] found id: ""
	I1209 11:55:21.894884  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.894892  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:21.894901  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:21.894914  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:21.942567  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:21.942625  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:21.956849  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:21.956879  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:22.020700  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:22.020724  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:22.020738  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:22.095730  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:22.095767  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:24.631715  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:24.644165  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:24.644252  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:24.677720  662586 cri.go:89] found id: ""
	I1209 11:55:24.677757  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.677769  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:24.677778  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:24.677835  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:24.711053  662586 cri.go:89] found id: ""
	I1209 11:55:24.711086  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.711095  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:24.711101  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:24.711154  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:24.744107  662586 cri.go:89] found id: ""
	I1209 11:55:24.744139  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.744148  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:24.744154  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:24.744210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:24.777811  662586 cri.go:89] found id: ""
	I1209 11:55:24.777853  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.777866  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:24.777876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:24.777938  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:24.810524  662586 cri.go:89] found id: ""
	I1209 11:55:24.810558  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.810571  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:24.810580  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:24.810648  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:24.843551  662586 cri.go:89] found id: ""
	I1209 11:55:24.843582  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.843590  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:24.843597  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:24.843649  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:24.875342  662586 cri.go:89] found id: ""
	I1209 11:55:24.875371  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.875384  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:24.875390  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:24.875446  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:24.910298  662586 cri.go:89] found id: ""
	I1209 11:55:24.910329  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.910340  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:24.910352  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:24.910377  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:24.962151  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:24.962204  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:24.976547  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:24.976577  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:25.050606  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:25.050635  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:25.050652  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:25.134204  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:25.134254  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:27.671220  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:27.685132  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:27.685194  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:27.718113  662586 cri.go:89] found id: ""
	I1209 11:55:27.718141  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.718150  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:27.718160  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:27.718242  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:27.752350  662586 cri.go:89] found id: ""
	I1209 11:55:27.752384  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.752395  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:27.752401  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:27.752481  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:27.797360  662586 cri.go:89] found id: ""
	I1209 11:55:27.797393  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.797406  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:27.797415  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:27.797488  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:27.834549  662586 cri.go:89] found id: ""
	I1209 11:55:27.834579  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.834588  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:27.834594  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:27.834655  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:27.874403  662586 cri.go:89] found id: ""
	I1209 11:55:27.874440  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.874465  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:27.874474  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:27.874557  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:27.914324  662586 cri.go:89] found id: ""
	I1209 11:55:27.914360  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.914373  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:27.914380  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:27.914450  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:27.948001  662586 cri.go:89] found id: ""
	I1209 11:55:27.948043  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.948056  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:27.948066  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:27.948219  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:27.982329  662586 cri.go:89] found id: ""
	I1209 11:55:27.982360  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.982369  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:27.982379  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:27.982391  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:28.038165  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:28.038228  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:28.051578  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:28.051609  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:28.119914  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:28.119937  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:28.119951  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:28.195634  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:28.195679  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:30.735392  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:30.748430  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:30.748521  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:30.780500  662586 cri.go:89] found id: ""
	I1209 11:55:30.780528  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.780537  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:30.780544  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:30.780606  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:30.812430  662586 cri.go:89] found id: ""
	I1209 11:55:30.812462  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.812470  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:30.812477  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:30.812530  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:30.854030  662586 cri.go:89] found id: ""
	I1209 11:55:30.854057  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.854066  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:30.854073  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:30.854130  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:30.892144  662586 cri.go:89] found id: ""
	I1209 11:55:30.892182  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.892202  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:30.892211  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:30.892284  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:30.927540  662586 cri.go:89] found id: ""
	I1209 11:55:30.927576  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.927590  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:30.927597  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:30.927660  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:30.963820  662586 cri.go:89] found id: ""
	I1209 11:55:30.963852  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.963861  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:30.963867  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:30.963920  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:30.997793  662586 cri.go:89] found id: ""
	I1209 11:55:30.997819  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.997828  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:30.997836  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:30.997902  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:31.031649  662586 cri.go:89] found id: ""
	I1209 11:55:31.031699  662586 logs.go:282] 0 containers: []
	W1209 11:55:31.031712  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:31.031726  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:31.031746  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:31.101464  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:31.101492  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:31.101509  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:31.184635  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:31.184681  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:31.222690  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:31.222732  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:31.276518  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:31.276566  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:33.790941  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:33.805299  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:33.805390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:33.844205  662586 cri.go:89] found id: ""
	I1209 11:55:33.844241  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.844253  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:33.844262  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:33.844337  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:33.883378  662586 cri.go:89] found id: ""
	I1209 11:55:33.883410  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.883424  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:33.883431  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:33.883505  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:33.920007  662586 cri.go:89] found id: ""
	I1209 11:55:33.920049  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.920061  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:33.920074  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:33.920141  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:33.956111  662586 cri.go:89] found id: ""
	I1209 11:55:33.956163  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.956175  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:33.956183  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:33.956241  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:33.990057  662586 cri.go:89] found id: ""
	I1209 11:55:33.990092  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.990102  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:33.990109  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:33.990166  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:34.023046  662586 cri.go:89] found id: ""
	I1209 11:55:34.023082  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.023096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:34.023103  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:34.023171  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:34.055864  662586 cri.go:89] found id: ""
	I1209 11:55:34.055898  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.055909  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:34.055916  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:34.055987  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:34.091676  662586 cri.go:89] found id: ""
	I1209 11:55:34.091710  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.091722  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:34.091733  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:34.091747  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:34.142959  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:34.143002  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:34.156431  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:34.156466  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:34.230277  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:34.230303  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:34.230320  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:34.313660  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:34.313713  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:36.850056  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:36.862486  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:36.862582  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:36.893134  662586 cri.go:89] found id: ""
	I1209 11:55:36.893163  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.893173  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:36.893179  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:36.893257  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:36.927438  662586 cri.go:89] found id: ""
	I1209 11:55:36.927469  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.927479  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:36.927485  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:36.927546  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:36.958787  662586 cri.go:89] found id: ""
	I1209 11:55:36.958818  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.958829  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:36.958837  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:36.958901  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:36.995470  662586 cri.go:89] found id: ""
	I1209 11:55:36.995508  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.995520  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:36.995529  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:36.995590  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:37.026705  662586 cri.go:89] found id: ""
	I1209 11:55:37.026736  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.026746  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:37.026752  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:37.026805  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:37.059717  662586 cri.go:89] found id: ""
	I1209 11:55:37.059748  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.059756  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:37.059762  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:37.059820  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:37.094049  662586 cri.go:89] found id: ""
	I1209 11:55:37.094076  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.094088  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:37.094097  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:37.094190  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:37.128684  662586 cri.go:89] found id: ""
	I1209 11:55:37.128715  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.128724  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:37.128735  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:37.128755  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:37.177932  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:37.177973  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:37.191218  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:37.191252  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:37.256488  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:37.256521  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:37.256538  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:37.330603  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:37.330647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:39.868604  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:39.881991  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:39.882063  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:39.916750  662586 cri.go:89] found id: ""
	I1209 11:55:39.916786  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.916799  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:39.916806  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:39.916874  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:39.957744  662586 cri.go:89] found id: ""
	I1209 11:55:39.957773  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.957781  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:39.957788  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:39.957854  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:39.994613  662586 cri.go:89] found id: ""
	I1209 11:55:39.994645  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.994654  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:39.994661  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:39.994726  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:40.032606  662586 cri.go:89] found id: ""
	I1209 11:55:40.032635  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.032644  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:40.032650  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:40.032710  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:40.067172  662586 cri.go:89] found id: ""
	I1209 11:55:40.067204  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.067214  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:40.067221  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:40.067278  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:40.101391  662586 cri.go:89] found id: ""
	I1209 11:55:40.101423  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.101432  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:40.101439  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:40.101510  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:40.133160  662586 cri.go:89] found id: ""
	I1209 11:55:40.133196  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.133209  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:40.133217  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:40.133283  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:40.166105  662586 cri.go:89] found id: ""
	I1209 11:55:40.166137  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.166145  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:40.166160  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:40.166187  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:40.231525  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:40.231559  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:40.231582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:40.311298  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:40.311354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:40.350040  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:40.350077  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:40.404024  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:40.404061  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:42.917868  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:42.930289  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:42.930357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:42.962822  662586 cri.go:89] found id: ""
	I1209 11:55:42.962856  662586 logs.go:282] 0 containers: []
	W1209 11:55:42.962869  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:42.962878  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:42.962950  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:42.996932  662586 cri.go:89] found id: ""
	I1209 11:55:42.996962  662586 logs.go:282] 0 containers: []
	W1209 11:55:42.996972  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:42.996979  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:42.997040  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:43.031782  662586 cri.go:89] found id: ""
	I1209 11:55:43.031824  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.031837  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:43.031846  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:43.031915  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:43.064717  662586 cri.go:89] found id: ""
	I1209 11:55:43.064751  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.064764  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:43.064774  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:43.064851  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:43.097248  662586 cri.go:89] found id: ""
	I1209 11:55:43.097278  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.097287  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:43.097294  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:43.097356  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:43.135726  662586 cri.go:89] found id: ""
	I1209 11:55:43.135766  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.135779  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:43.135788  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:43.135881  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:43.171120  662586 cri.go:89] found id: ""
	I1209 11:55:43.171148  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.171157  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:43.171163  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:43.171216  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:43.207488  662586 cri.go:89] found id: ""
	I1209 11:55:43.207523  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.207533  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:43.207545  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:43.207565  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:43.276112  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:43.276142  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:43.276159  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:43.354942  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:43.354990  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:43.392755  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:43.392800  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:43.445708  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:43.445752  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:45.962533  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:45.975508  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:45.975589  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:46.009619  662586 cri.go:89] found id: ""
	I1209 11:55:46.009653  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.009663  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:46.009670  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:46.009726  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:46.042218  662586 cri.go:89] found id: ""
	I1209 11:55:46.042250  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.042259  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:46.042265  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:46.042318  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:46.076204  662586 cri.go:89] found id: ""
	I1209 11:55:46.076239  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.076249  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:46.076255  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:46.076326  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:46.113117  662586 cri.go:89] found id: ""
	I1209 11:55:46.113145  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.113154  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:46.113160  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:46.113225  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:46.148232  662586 cri.go:89] found id: ""
	I1209 11:55:46.148277  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.148293  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:46.148303  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:46.148379  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:46.185028  662586 cri.go:89] found id: ""
	I1209 11:55:46.185083  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.185096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:46.185106  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:46.185200  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:46.222882  662586 cri.go:89] found id: ""
	I1209 11:55:46.222920  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.222933  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:46.222941  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:46.223007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:46.263486  662586 cri.go:89] found id: ""
	I1209 11:55:46.263528  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.263538  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:46.263549  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:46.263565  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:46.340524  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:46.340550  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:46.340567  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:46.422768  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:46.422810  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:46.464344  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:46.464382  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:46.517311  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:46.517354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:49.031192  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:49.043840  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:49.043929  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:49.077648  662586 cri.go:89] found id: ""
	I1209 11:55:49.077705  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.077720  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:49.077730  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:49.077802  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:49.114111  662586 cri.go:89] found id: ""
	I1209 11:55:49.114138  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.114146  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:49.114154  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:49.114236  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:49.147870  662586 cri.go:89] found id: ""
	I1209 11:55:49.147908  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.147917  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:49.147923  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:49.147976  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:49.185223  662586 cri.go:89] found id: ""
	I1209 11:55:49.185256  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.185269  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:49.185277  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:49.185350  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:49.218037  662586 cri.go:89] found id: ""
	I1209 11:55:49.218068  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.218077  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:49.218084  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:49.218138  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:49.255483  662586 cri.go:89] found id: ""
	I1209 11:55:49.255522  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.255535  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:49.255549  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:49.255629  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:49.288623  662586 cri.go:89] found id: ""
	I1209 11:55:49.288650  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.288659  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:49.288666  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:49.288732  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:49.322880  662586 cri.go:89] found id: ""
	I1209 11:55:49.322913  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.322921  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:49.322930  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:49.322943  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:49.372380  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:49.372428  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:49.385877  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:49.385914  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:49.460078  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:49.460101  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:49.460114  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:49.534588  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:49.534647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:52.071408  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:52.084198  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:52.084276  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:52.118908  662586 cri.go:89] found id: ""
	I1209 11:55:52.118937  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.118950  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:52.118958  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:52.119026  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:52.156494  662586 cri.go:89] found id: ""
	I1209 11:55:52.156521  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.156530  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:52.156535  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:52.156586  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:52.196037  662586 cri.go:89] found id: ""
	I1209 11:55:52.196075  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.196094  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:52.196102  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:52.196177  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:52.229436  662586 cri.go:89] found id: ""
	I1209 11:55:52.229465  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.229477  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:52.229486  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:52.229558  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:52.268751  662586 cri.go:89] found id: ""
	I1209 11:55:52.268785  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.268797  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:52.268805  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:52.268871  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:52.302405  662586 cri.go:89] found id: ""
	I1209 11:55:52.302436  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.302446  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:52.302453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:52.302522  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:52.338641  662586 cri.go:89] found id: ""
	I1209 11:55:52.338676  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.338688  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:52.338698  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:52.338754  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:52.375541  662586 cri.go:89] found id: ""
	I1209 11:55:52.375578  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.375591  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:52.375604  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:52.375624  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:52.389140  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:52.389190  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:52.460520  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:52.460546  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:52.460562  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:52.535234  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:52.535280  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:52.573317  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:52.573354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:55.124068  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:55.136800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:55.136868  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:55.169724  662586 cri.go:89] found id: ""
	I1209 11:55:55.169757  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.169769  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:55.169777  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:55.169843  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:55.207466  662586 cri.go:89] found id: ""
	I1209 11:55:55.207514  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.207528  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:55.207537  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:55.207600  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:55.241761  662586 cri.go:89] found id: ""
	I1209 11:55:55.241790  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.241801  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:55.241809  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:55.241874  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:55.274393  662586 cri.go:89] found id: ""
	I1209 11:55:55.274434  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.274447  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:55.274455  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:55.274522  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:55.307942  662586 cri.go:89] found id: ""
	I1209 11:55:55.307988  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.308002  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:55.308012  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:55.308088  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:55.340074  662586 cri.go:89] found id: ""
	I1209 11:55:55.340107  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.340116  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:55.340122  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:55.340196  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:55.388077  662586 cri.go:89] found id: ""
	I1209 11:55:55.388119  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.388140  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:55.388149  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:55.388230  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:55.422923  662586 cri.go:89] found id: ""
	I1209 11:55:55.422961  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.422975  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:55.422990  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:55.423008  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:55.476178  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:55.476219  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:55.489891  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:55.489919  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:55.555705  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:55.555726  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:55.555745  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:55.634818  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:55.634862  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:58.173169  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:58.188529  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:58.188620  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:58.225602  662586 cri.go:89] found id: ""
	I1209 11:55:58.225630  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.225641  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:58.225649  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:58.225709  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:58.259597  662586 cri.go:89] found id: ""
	I1209 11:55:58.259638  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.259652  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:58.259662  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:58.259744  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:58.293287  662586 cri.go:89] found id: ""
	I1209 11:55:58.293320  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.293329  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:58.293336  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:58.293390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:58.326581  662586 cri.go:89] found id: ""
	I1209 11:55:58.326611  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.326622  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:58.326630  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:58.326699  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:58.359636  662586 cri.go:89] found id: ""
	I1209 11:55:58.359665  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.359675  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:58.359681  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:58.359736  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:58.396767  662586 cri.go:89] found id: ""
	I1209 11:55:58.396798  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.396809  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:58.396818  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:58.396887  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:58.428907  662586 cri.go:89] found id: ""
	I1209 11:55:58.428941  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.428954  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:58.428962  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:58.429032  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:58.466082  662586 cri.go:89] found id: ""
	I1209 11:55:58.466124  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.466136  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:58.466149  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:58.466186  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:58.542333  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:58.542378  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:58.582397  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:58.582436  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:58.632980  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:58.633030  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:58.648464  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:58.648514  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:58.711714  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:01.212475  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:01.225574  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:01.225642  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:01.259666  662586 cri.go:89] found id: ""
	I1209 11:56:01.259704  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.259718  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:01.259726  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:01.259800  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:01.295433  662586 cri.go:89] found id: ""
	I1209 11:56:01.295474  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.295495  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:01.295503  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:01.295561  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:01.330316  662586 cri.go:89] found id: ""
	I1209 11:56:01.330352  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.330364  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:01.330373  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:01.330447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:01.366762  662586 cri.go:89] found id: ""
	I1209 11:56:01.366797  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.366808  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:01.366814  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:01.366878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:01.403511  662586 cri.go:89] found id: ""
	I1209 11:56:01.403539  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.403547  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:01.403553  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:01.403604  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:01.436488  662586 cri.go:89] found id: ""
	I1209 11:56:01.436526  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.436538  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:01.436546  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:01.436617  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:01.471647  662586 cri.go:89] found id: ""
	I1209 11:56:01.471676  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.471685  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:01.471690  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:01.471744  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:01.504065  662586 cri.go:89] found id: ""
	I1209 11:56:01.504099  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.504111  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:01.504124  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:01.504143  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:01.553434  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:01.553482  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:01.567537  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:01.567579  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:01.636968  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:01.636995  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:01.637012  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:01.713008  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:01.713049  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:04.253143  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:04.266428  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:04.266512  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:04.298769  662586 cri.go:89] found id: ""
	I1209 11:56:04.298810  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.298823  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:04.298833  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:04.298913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:04.330392  662586 cri.go:89] found id: ""
	I1209 11:56:04.330428  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.330441  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:04.330449  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:04.330528  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:04.362409  662586 cri.go:89] found id: ""
	I1209 11:56:04.362443  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.362455  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:04.362463  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:04.362544  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:04.396853  662586 cri.go:89] found id: ""
	I1209 11:56:04.396884  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.396893  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:04.396899  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:04.396966  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:04.430425  662586 cri.go:89] found id: ""
	I1209 11:56:04.430461  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.430470  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:04.430477  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:04.430531  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:04.465354  662586 cri.go:89] found id: ""
	I1209 11:56:04.465391  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.465403  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:04.465411  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:04.465480  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:04.500114  662586 cri.go:89] found id: ""
	I1209 11:56:04.500156  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.500167  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:04.500179  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:04.500259  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:04.534853  662586 cri.go:89] found id: ""
	I1209 11:56:04.534888  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.534902  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:04.534914  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:04.534928  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:04.586419  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:04.586457  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:04.600690  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:04.600728  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:04.669645  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:04.669685  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:04.669703  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:04.747973  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:04.748026  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:07.288721  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:07.302905  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:07.302975  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:07.336686  662586 cri.go:89] found id: ""
	I1209 11:56:07.336720  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.336728  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:07.336735  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:07.336798  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:07.370119  662586 cri.go:89] found id: ""
	I1209 11:56:07.370150  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.370159  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:07.370165  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:07.370245  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:07.402818  662586 cri.go:89] found id: ""
	I1209 11:56:07.402845  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.402853  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:07.402861  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:07.402923  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:07.437694  662586 cri.go:89] found id: ""
	I1209 11:56:07.437722  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.437732  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:07.437741  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:07.437806  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:07.474576  662586 cri.go:89] found id: ""
	I1209 11:56:07.474611  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.474622  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:07.474629  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:07.474705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:07.508538  662586 cri.go:89] found id: ""
	I1209 11:56:07.508575  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.508585  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:07.508592  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:07.508661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:07.548863  662586 cri.go:89] found id: ""
	I1209 11:56:07.548897  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.548911  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:07.548922  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:07.549093  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:07.592515  662586 cri.go:89] found id: ""
	I1209 11:56:07.592543  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.592555  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:07.592564  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:07.592579  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:07.652176  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:07.652219  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:07.703040  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:07.703094  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:07.717880  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:07.717924  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:07.783396  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:07.783425  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:07.783441  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:10.362395  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:10.377478  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:10.377574  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:10.411923  662586 cri.go:89] found id: ""
	I1209 11:56:10.411956  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.411969  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:10.411978  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:10.412049  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:10.444601  662586 cri.go:89] found id: ""
	I1209 11:56:10.444633  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.444642  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:10.444648  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:10.444705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:10.486720  662586 cri.go:89] found id: ""
	I1209 11:56:10.486753  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.486763  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:10.486769  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:10.486822  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:10.523535  662586 cri.go:89] found id: ""
	I1209 11:56:10.523572  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.523581  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:10.523587  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:10.523641  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:10.557701  662586 cri.go:89] found id: ""
	I1209 11:56:10.557741  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.557754  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:10.557762  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:10.557834  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:10.593914  662586 cri.go:89] found id: ""
	I1209 11:56:10.593949  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.593959  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:10.593965  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:10.594017  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:10.626367  662586 cri.go:89] found id: ""
	I1209 11:56:10.626469  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.626482  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:10.626489  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:10.626547  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:10.665415  662586 cri.go:89] found id: ""
	I1209 11:56:10.665446  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.665456  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:10.665467  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:10.665480  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:10.747483  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:10.747532  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:10.787728  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:10.787758  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:10.840678  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:10.840722  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:10.855774  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:10.855809  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:10.929638  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:13.430793  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:13.446156  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:13.446261  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:13.491624  662586 cri.go:89] found id: ""
	I1209 11:56:13.491662  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.491675  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:13.491684  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:13.491758  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:13.537619  662586 cri.go:89] found id: ""
	I1209 11:56:13.537653  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.537666  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:13.537675  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:13.537750  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:13.585761  662586 cri.go:89] found id: ""
	I1209 11:56:13.585796  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.585810  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:13.585819  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:13.585883  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:13.620740  662586 cri.go:89] found id: ""
	I1209 11:56:13.620774  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.620785  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:13.620791  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:13.620858  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:13.654405  662586 cri.go:89] found id: ""
	I1209 11:56:13.654433  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.654442  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:13.654448  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:13.654509  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:13.687520  662586 cri.go:89] found id: ""
	I1209 11:56:13.687547  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.687558  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:13.687566  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:13.687642  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:13.721105  662586 cri.go:89] found id: ""
	I1209 11:56:13.721140  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.721153  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:13.721162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:13.721238  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:13.753900  662586 cri.go:89] found id: ""
	I1209 11:56:13.753933  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.753945  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:13.753960  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:13.753978  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:13.805864  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:13.805909  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:13.819356  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:13.819393  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:13.896097  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:13.896128  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:13.896150  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:13.979041  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:13.979084  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:16.516777  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:16.529916  662586 kubeadm.go:597] duration metric: took 4m1.869807937s to restartPrimaryControlPlane
	W1209 11:56:16.530015  662586 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:56:16.530067  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:56:18.635832  662586 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.105742271s)
	I1209 11:56:18.635914  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:56:18.651678  662586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:56:18.661965  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:56:18.672060  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:56:18.672082  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:56:18.672147  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:56:18.681627  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:56:18.681697  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:56:18.691514  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:56:18.701210  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:56:18.701292  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:56:18.710934  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:56:18.720506  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:56:18.720583  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:56:18.729996  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:56:18.739425  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:56:18.739486  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:56:18.748788  662586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:56:18.981849  662586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:58:14.994765  662586 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 11:58:14.994918  662586 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 11:58:14.995050  662586 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:58:14.995118  662586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:58:14.995182  662586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:58:14.995272  662586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:58:14.995353  662586 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:58:14.995410  662586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:58:14.996905  662586 out.go:235]   - Generating certificates and keys ...
	I1209 11:58:14.997000  662586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:58:14.997055  662586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:58:14.997123  662586 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:58:14.997184  662586 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:58:14.997278  662586 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:58:14.997349  662586 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:58:14.997474  662586 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:58:14.997567  662586 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:58:14.997631  662586 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:58:14.997700  662586 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:58:14.997736  662586 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:58:14.997783  662586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:58:14.997826  662586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:58:14.997871  662586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:58:14.997930  662586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:58:14.997977  662586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:58:14.998063  662586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:58:14.998141  662586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:58:14.998199  662586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:58:14.998264  662586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:58:14.999539  662586 out.go:235]   - Booting up control plane ...
	I1209 11:58:14.999663  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:58:14.999748  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:58:14.999824  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:58:14.999946  662586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:58:15.000148  662586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:58:15.000221  662586 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:58:15.000326  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000532  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.000598  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000753  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.000814  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000971  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001064  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.001273  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001335  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.001486  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001493  662586 kubeadm.go:310] 
	I1209 11:58:15.001553  662586 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 11:58:15.001616  662586 kubeadm.go:310] 		timed out waiting for the condition
	I1209 11:58:15.001631  662586 kubeadm.go:310] 
	I1209 11:58:15.001685  662586 kubeadm.go:310] 	This error is likely caused by:
	I1209 11:58:15.001732  662586 kubeadm.go:310] 		- The kubelet is not running
	I1209 11:58:15.001883  662586 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 11:58:15.001897  662586 kubeadm.go:310] 
	I1209 11:58:15.002041  662586 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 11:58:15.002087  662586 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 11:58:15.002146  662586 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 11:58:15.002156  662586 kubeadm.go:310] 
	I1209 11:58:15.002294  662586 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 11:58:15.002373  662586 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 11:58:15.002380  662586 kubeadm.go:310] 
	I1209 11:58:15.002502  662586 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 11:58:15.002623  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 11:58:15.002725  662586 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 11:58:15.002799  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 11:58:15.002835  662586 kubeadm.go:310] 
	W1209 11:58:15.002956  662586 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1209 11:58:15.003022  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:58:15.469838  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:58:15.484503  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:58:15.493409  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:58:15.493430  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:58:15.493487  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:58:15.502508  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:58:15.502568  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:58:15.511743  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:58:15.519855  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:58:15.519913  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:58:15.528743  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:58:15.537000  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:58:15.537072  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:58:15.546520  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:58:15.555448  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:58:15.555526  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:58:15.565618  662586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:58:15.631763  662586 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:58:15.631832  662586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:58:15.798683  662586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:58:15.798822  662586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:58:15.798957  662586 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:58:15.974522  662586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:58:15.976286  662586 out.go:235]   - Generating certificates and keys ...
	I1209 11:58:15.976408  662586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:58:15.976492  662586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:58:15.976616  662586 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:58:15.976714  662586 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:58:15.976813  662586 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:58:15.976889  662586 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:58:15.976978  662586 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:58:15.977064  662586 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:58:15.977184  662586 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:58:15.977251  662586 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:58:15.977287  662586 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:58:15.977363  662586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:58:16.193383  662586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:58:16.324912  662586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:58:16.541372  662586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:58:16.786389  662586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:58:16.807241  662586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:58:16.808750  662586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:58:16.808823  662586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:58:16.951756  662586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:58:16.954338  662586 out.go:235]   - Booting up control plane ...
	I1209 11:58:16.954486  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:58:16.968892  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:58:16.970556  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:58:16.971301  662586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:58:16.974040  662586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:58:56.976537  662586 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:58:56.976966  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:56.977214  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:01.977861  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:01.978074  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:11.978821  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:11.979056  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:31.980118  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:31.980386  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 12:00:11.981507  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 12:00:11.981791  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 12:00:11.981804  662586 kubeadm.go:310] 
	I1209 12:00:11.981863  662586 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 12:00:11.981916  662586 kubeadm.go:310] 		timed out waiting for the condition
	I1209 12:00:11.981926  662586 kubeadm.go:310] 
	I1209 12:00:11.981977  662586 kubeadm.go:310] 	This error is likely caused by:
	I1209 12:00:11.982028  662586 kubeadm.go:310] 		- The kubelet is not running
	I1209 12:00:11.982232  662586 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 12:00:11.982262  662586 kubeadm.go:310] 
	I1209 12:00:11.982449  662586 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 12:00:11.982506  662586 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 12:00:11.982555  662586 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 12:00:11.982564  662586 kubeadm.go:310] 
	I1209 12:00:11.982709  662586 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 12:00:11.982824  662586 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 12:00:11.982837  662586 kubeadm.go:310] 
	I1209 12:00:11.982975  662586 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 12:00:11.983092  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 12:00:11.983186  662586 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 12:00:11.983259  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 12:00:11.983308  662586 kubeadm.go:310] 
	I1209 12:00:11.983442  662586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 12:00:11.983534  662586 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 12:00:11.983622  662586 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 12:00:11.983692  662586 kubeadm.go:394] duration metric: took 7m57.372617524s to StartCluster
	I1209 12:00:11.983778  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 12:00:11.983852  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 12:00:12.032068  662586 cri.go:89] found id: ""
	I1209 12:00:12.032110  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.032126  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 12:00:12.032139  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 12:00:12.032232  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 12:00:12.074929  662586 cri.go:89] found id: ""
	I1209 12:00:12.074977  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.074990  662586 logs.go:284] No container was found matching "etcd"
	I1209 12:00:12.075001  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 12:00:12.075074  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 12:00:12.113547  662586 cri.go:89] found id: ""
	I1209 12:00:12.113582  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.113592  662586 logs.go:284] No container was found matching "coredns"
	I1209 12:00:12.113598  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 12:00:12.113661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 12:00:12.147436  662586 cri.go:89] found id: ""
	I1209 12:00:12.147465  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.147475  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 12:00:12.147481  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 12:00:12.147535  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 12:00:12.184398  662586 cri.go:89] found id: ""
	I1209 12:00:12.184439  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.184453  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 12:00:12.184463  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 12:00:12.184541  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 12:00:12.230844  662586 cri.go:89] found id: ""
	I1209 12:00:12.230884  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.230896  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 12:00:12.230905  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 12:00:12.230981  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 12:00:12.264897  662586 cri.go:89] found id: ""
	I1209 12:00:12.264930  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.264939  662586 logs.go:284] No container was found matching "kindnet"
	I1209 12:00:12.264946  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 12:00:12.265001  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 12:00:12.303553  662586 cri.go:89] found id: ""
	I1209 12:00:12.303594  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.303607  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 12:00:12.303622  662586 logs.go:123] Gathering logs for container status ...
	I1209 12:00:12.303638  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 12:00:12.342799  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 12:00:12.342838  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 12:00:12.392992  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 12:00:12.393039  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 12:00:12.407065  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 12:00:12.407100  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 12:00:12.483599  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 12:00:12.483651  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 12:00:12.483675  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
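For reference, the kubelet health probe and the troubleshooting commands quoted in the kubeadm output above can also be run by hand on the node. A minimal sketch, assuming shell access to the node via the minikube profile used by this test (shown as <profile> here as a placeholder; every other command is taken verbatim from the log above):

    # open a shell on the node for this profile (placeholder profile name)
    minikube ssh -p <profile>

    # the same healthz probe that kubeadm's kubelet-check performs
    curl -sSL http://localhost:10248/healthz

    # kubelet service state and recent logs, as suggested by kubeadm
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet

    # list control-plane containers under CRI-O, as suggested by kubeadm
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause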
	W1209 12:00:12.591518  662586 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1209 12:00:12.591615  662586 out.go:270] * 
	W1209 12:00:12.591715  662586 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 12:00:12.591737  662586 out.go:270] * 
	W1209 12:00:12.592644  662586 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
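If the failure is to be reported upstream, the boxed advice above can be followed from the host. A minimal sketch, with <profile> again standing in for the cluster profile used by this run:

    # collect the full minikube log bundle for attachment to a GitHub issue
    minikube logs --file=logs.txt -p <profile>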
	I1209 12:00:12.596340  662586 out.go:201] 
	W1209 12:00:12.597706  662586 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 12:00:12.597757  662586 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 12:00:12.597798  662586 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 12:00:12.599219  662586 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-014592 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
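Note: the remediation minikube itself suggests in the log above amounts to re-running the same start command with the kubelet cgroup driver pinned to systemd. The invocation below is only an illustrative sketch assembled from the failed args and that suggestion; it was not executed as part of this run:

	out/minikube-linux-amd64 start -p old-k8s-version-014592 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd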
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-014592 -n old-k8s-version-014592
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-014592 -n old-k8s-version-014592: exit status 2 (262.841077ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-014592 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-014592 logs -n 25: (1.562543232s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p running-upgrade-119214                              | running-upgrade-119214       | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-905993 | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:43 UTC |
	|         | disable-driver-mounts-905993                           |                              |         |         |                     |                     |
	| start   | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-005123            | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC | 09 Dec 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-005123                                  | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-820741             | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC | 09 Dec 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:46 UTC |
	| start   | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:47 UTC |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-005123                 | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-005123                                  | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-014592        | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-820741                  | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC | 09 Dec 24 11:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-482476  | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC | 09 Dec 24 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC |                     |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC | 09 Dec 24 11:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-014592             | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC | 09 Dec 24 11:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-482476       | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:49 UTC | 09 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 11:49:59
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 11:49:59.489110  663024 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:49:59.489218  663024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:49:59.489223  663024 out.go:358] Setting ErrFile to fd 2...
	I1209 11:49:59.489227  663024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:49:59.489393  663024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:49:59.489968  663024 out.go:352] Setting JSON to false
	I1209 11:49:59.491001  663024 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":16343,"bootTime":1733728656,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 11:49:59.491116  663024 start.go:139] virtualization: kvm guest
	I1209 11:49:59.493422  663024 out.go:177] * [default-k8s-diff-port-482476] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 11:49:59.495230  663024 notify.go:220] Checking for updates...
	I1209 11:49:59.495310  663024 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:49:59.496833  663024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:49:59.498350  663024 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:49:59.499799  663024 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:49:59.501159  663024 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 11:49:59.502351  663024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:49:59.503976  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:49:59.504355  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:49:59.504434  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:49:59.519867  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39863
	I1209 11:49:59.520292  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:49:59.520859  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:49:59.520886  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:49:59.521235  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:49:59.521438  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:49:59.521739  663024 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:49:59.522124  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:49:59.522225  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:49:59.537355  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39881
	I1209 11:49:59.537882  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:49:59.538473  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:49:59.538507  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:49:59.538862  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:49:59.539111  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:49:59.573642  663024 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 11:49:59.574808  663024 start.go:297] selected driver: kvm2
	I1209 11:49:59.574821  663024 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:49:59.574939  663024 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:49:59.575618  663024 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:49:59.575711  663024 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 11:49:59.591990  663024 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 11:49:59.592425  663024 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:49:59.592468  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:49:59.592500  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:49:59.592535  663024 start.go:340] cluster config:
	{Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:49:59.592645  663024 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:49:59.594451  663024 out.go:177] * Starting "default-k8s-diff-port-482476" primary control-plane node in "default-k8s-diff-port-482476" cluster
	I1209 11:49:56.270467  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:49:59.342522  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:49:59.595812  663024 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:49:59.595868  663024 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 11:49:59.595876  663024 cache.go:56] Caching tarball of preloaded images
	I1209 11:49:59.595966  663024 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 11:49:59.595978  663024 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 11:49:59.596080  663024 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/config.json ...
	I1209 11:49:59.596311  663024 start.go:360] acquireMachinesLock for default-k8s-diff-port-482476: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:50:05.422464  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:08.494459  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:14.574530  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:17.646514  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:23.726481  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:26.798485  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:32.878439  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:35.950501  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:42.030519  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:45.102528  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:51.182489  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:54.254539  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:00.334461  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:03.406475  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:09.486483  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:12.558522  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:18.638454  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:24.715494  662109 start.go:364] duration metric: took 4m3.035196519s to acquireMachinesLock for "no-preload-820741"
	I1209 11:51:24.715567  662109 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:51:24.715578  662109 fix.go:54] fixHost starting: 
	I1209 11:51:24.715984  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:51:24.716040  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:51:24.731722  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I1209 11:51:24.732247  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:51:24.732853  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:51:24.732876  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:51:24.733244  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:51:24.733437  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:24.733606  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:51:24.735295  662109 fix.go:112] recreateIfNeeded on no-preload-820741: state=Stopped err=<nil>
	I1209 11:51:24.735325  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	W1209 11:51:24.735521  662109 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:51:24.737237  662109 out.go:177] * Restarting existing kvm2 VM for "no-preload-820741" ...
	I1209 11:51:21.710446  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:24.712631  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:51:24.712695  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:51:24.713111  661546 buildroot.go:166] provisioning hostname "embed-certs-005123"
	I1209 11:51:24.713140  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:51:24.713398  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:51:24.715321  661546 machine.go:96] duration metric: took 4m34.547615205s to provisionDockerMachine
	I1209 11:51:24.715372  661546 fix.go:56] duration metric: took 4m34.572283015s for fixHost
	I1209 11:51:24.715381  661546 start.go:83] releasing machines lock for "embed-certs-005123", held for 4m34.572321017s
	W1209 11:51:24.715401  661546 start.go:714] error starting host: provision: host is not running
	W1209 11:51:24.715538  661546 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1209 11:51:24.715550  661546 start.go:729] Will try again in 5 seconds ...
	I1209 11:51:24.738507  662109 main.go:141] libmachine: (no-preload-820741) Calling .Start
	I1209 11:51:24.738692  662109 main.go:141] libmachine: (no-preload-820741) Ensuring networks are active...
	I1209 11:51:24.739450  662109 main.go:141] libmachine: (no-preload-820741) Ensuring network default is active
	I1209 11:51:24.739799  662109 main.go:141] libmachine: (no-preload-820741) Ensuring network mk-no-preload-820741 is active
	I1209 11:51:24.740206  662109 main.go:141] libmachine: (no-preload-820741) Getting domain xml...
	I1209 11:51:24.740963  662109 main.go:141] libmachine: (no-preload-820741) Creating domain...
	I1209 11:51:25.958244  662109 main.go:141] libmachine: (no-preload-820741) Waiting to get IP...
	I1209 11:51:25.959122  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:25.959507  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:25.959585  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:25.959486  663348 retry.go:31] will retry after 256.759149ms: waiting for machine to come up
	I1209 11:51:26.218626  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.219187  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.219222  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.219121  663348 retry.go:31] will retry after 259.957451ms: waiting for machine to come up
	I1209 11:51:26.480403  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.480800  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.480828  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.480753  663348 retry.go:31] will retry after 482.242492ms: waiting for machine to come up
	I1209 11:51:29.718422  661546 start.go:360] acquireMachinesLock for embed-certs-005123: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:51:26.964420  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.964870  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.964903  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.964821  663348 retry.go:31] will retry after 386.489156ms: waiting for machine to come up
	I1209 11:51:27.353471  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:27.353850  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:27.353875  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:27.353796  663348 retry.go:31] will retry after 602.322538ms: waiting for machine to come up
	I1209 11:51:27.957621  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:27.958020  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:27.958051  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:27.957967  663348 retry.go:31] will retry after 747.355263ms: waiting for machine to come up
	I1209 11:51:28.707049  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:28.707486  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:28.707515  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:28.707436  663348 retry.go:31] will retry after 1.034218647s: waiting for machine to come up
	I1209 11:51:29.743755  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:29.744171  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:29.744213  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:29.744119  663348 retry.go:31] will retry after 1.348194555s: waiting for machine to come up
	I1209 11:51:31.094696  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:31.095202  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:31.095234  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:31.095124  663348 retry.go:31] will retry after 1.226653754s: waiting for machine to come up
	I1209 11:51:32.323529  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:32.323935  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:32.323959  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:32.323884  663348 retry.go:31] will retry after 2.008914491s: waiting for machine to come up
	I1209 11:51:34.335246  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:34.335619  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:34.335658  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:34.335593  663348 retry.go:31] will retry after 1.835576732s: waiting for machine to come up
	I1209 11:51:36.173316  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:36.173752  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:36.173786  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:36.173711  663348 retry.go:31] will retry after 3.204076548s: waiting for machine to come up
	I1209 11:51:39.382184  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:39.382619  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:39.382656  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:39.382560  663348 retry.go:31] will retry after 3.298451611s: waiting for machine to come up
	I1209 11:51:44.103077  662586 start.go:364] duration metric: took 3m16.308265809s to acquireMachinesLock for "old-k8s-version-014592"
	I1209 11:51:44.103164  662586 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:51:44.103178  662586 fix.go:54] fixHost starting: 
	I1209 11:51:44.103657  662586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:51:44.103716  662586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:51:44.121162  662586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I1209 11:51:44.121672  662586 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:51:44.122203  662586 main.go:141] libmachine: Using API Version  1
	I1209 11:51:44.122232  662586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:51:44.122644  662586 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:51:44.122852  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:51:44.123023  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetState
	I1209 11:51:44.124544  662586 fix.go:112] recreateIfNeeded on old-k8s-version-014592: state=Stopped err=<nil>
	I1209 11:51:44.124567  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	W1209 11:51:44.124704  662586 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:51:44.126942  662586 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-014592" ...
	I1209 11:51:42.684438  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.684824  662109 main.go:141] libmachine: (no-preload-820741) Found IP for machine: 192.168.39.169
	I1209 11:51:42.684859  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has current primary IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.684867  662109 main.go:141] libmachine: (no-preload-820741) Reserving static IP address...
	I1209 11:51:42.685269  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "no-preload-820741", mac: "52:54:00:27:4c:0e", ip: "192.168.39.169"} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.685296  662109 main.go:141] libmachine: (no-preload-820741) DBG | skip adding static IP to network mk-no-preload-820741 - found existing host DHCP lease matching {name: "no-preload-820741", mac: "52:54:00:27:4c:0e", ip: "192.168.39.169"}
	I1209 11:51:42.685311  662109 main.go:141] libmachine: (no-preload-820741) Reserved static IP address: 192.168.39.169
	I1209 11:51:42.685334  662109 main.go:141] libmachine: (no-preload-820741) Waiting for SSH to be available...
	I1209 11:51:42.685348  662109 main.go:141] libmachine: (no-preload-820741) DBG | Getting to WaitForSSH function...
	I1209 11:51:42.687295  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.687588  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.687625  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.687702  662109 main.go:141] libmachine: (no-preload-820741) DBG | Using SSH client type: external
	I1209 11:51:42.687790  662109 main.go:141] libmachine: (no-preload-820741) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa (-rw-------)
	I1209 11:51:42.687824  662109 main.go:141] libmachine: (no-preload-820741) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:51:42.687844  662109 main.go:141] libmachine: (no-preload-820741) DBG | About to run SSH command:
	I1209 11:51:42.687857  662109 main.go:141] libmachine: (no-preload-820741) DBG | exit 0
	I1209 11:51:42.822609  662109 main.go:141] libmachine: (no-preload-820741) DBG | SSH cmd err, output: <nil>: 
	I1209 11:51:42.822996  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetConfigRaw
	I1209 11:51:42.823665  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:42.826484  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.826783  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.826808  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.827050  662109 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/config.json ...
	I1209 11:51:42.827323  662109 machine.go:93] provisionDockerMachine start ...
	I1209 11:51:42.827346  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:42.827620  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:42.830224  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.830569  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.830599  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.830717  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:42.830909  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.831107  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.831274  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:42.831454  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:42.831790  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:42.831807  662109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:51:42.938456  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:51:42.938500  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:42.938778  662109 buildroot.go:166] provisioning hostname "no-preload-820741"
	I1209 11:51:42.938813  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:42.939023  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:42.941706  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.942236  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.942267  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.942390  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:42.942606  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.942773  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.942922  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:42.943177  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:42.943382  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:42.943406  662109 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-820741 && echo "no-preload-820741" | sudo tee /etc/hostname
	I1209 11:51:43.065816  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-820741
	
	I1209 11:51:43.065849  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.068607  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.068916  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.068951  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.069127  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.069256  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.069351  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.069514  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.069637  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.069841  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.069861  662109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-820741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-820741/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-820741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:51:43.182210  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:51:43.182257  662109 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:51:43.182289  662109 buildroot.go:174] setting up certificates
	I1209 11:51:43.182305  662109 provision.go:84] configureAuth start
	I1209 11:51:43.182323  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:43.182674  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:43.185513  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.185872  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.185897  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.186018  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.188128  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.188482  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.188534  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.188668  662109 provision.go:143] copyHostCerts
	I1209 11:51:43.188752  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:51:43.188774  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:51:43.188840  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:51:43.188928  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:51:43.188936  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:51:43.188963  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:51:43.189019  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:51:43.189027  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:51:43.189049  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:51:43.189104  662109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.no-preload-820741 san=[127.0.0.1 192.168.39.169 localhost minikube no-preload-820741]
	I1209 11:51:43.488258  662109 provision.go:177] copyRemoteCerts
	I1209 11:51:43.488336  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:51:43.488367  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.491689  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.492025  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.492059  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.492267  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.492465  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.492635  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.492768  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:43.577708  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:51:43.602000  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 11:51:43.627251  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:51:43.651591  662109 provision.go:87] duration metric: took 469.266358ms to configureAuth
	I1209 11:51:43.651626  662109 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:51:43.651863  662109 config.go:182] Loaded profile config "no-preload-820741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:51:43.652059  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.655150  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.655489  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.655518  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.655738  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.655963  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.656146  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.656295  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.656483  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.656688  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.656710  662109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:51:43.870704  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:51:43.870738  662109 machine.go:96] duration metric: took 1.043398486s to provisionDockerMachine
	I1209 11:51:43.870756  662109 start.go:293] postStartSetup for "no-preload-820741" (driver="kvm2")
	I1209 11:51:43.870771  662109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:51:43.870796  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:43.871158  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:51:43.871186  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.873863  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.874207  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.874230  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.874408  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.874610  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.874800  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.874925  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:43.956874  662109 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:51:43.960825  662109 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:51:43.960853  662109 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:51:43.960919  662109 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:51:43.960993  662109 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:51:43.961095  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:51:43.970138  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:51:43.991975  662109 start.go:296] duration metric: took 121.20118ms for postStartSetup
	I1209 11:51:43.992020  662109 fix.go:56] duration metric: took 19.276442325s for fixHost
	I1209 11:51:43.992043  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.994707  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.995035  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.995069  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.995186  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.995403  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.995568  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.995716  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.995927  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.996107  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.996117  662109 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:51:44.102890  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745104.077047488
	
	I1209 11:51:44.102914  662109 fix.go:216] guest clock: 1733745104.077047488
	I1209 11:51:44.102922  662109 fix.go:229] Guest: 2024-12-09 11:51:44.077047488 +0000 UTC Remote: 2024-12-09 11:51:43.992024296 +0000 UTC m=+262.463051778 (delta=85.023192ms)
	I1209 11:51:44.102952  662109 fix.go:200] guest clock delta is within tolerance: 85.023192ms
	I1209 11:51:44.102957  662109 start.go:83] releasing machines lock for "no-preload-820741", held for 19.387413234s
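[editorial note] The fix.go lines just above compare the guest clock (read over SSH with `date +%s.%N`) against the host clock and accept the skew because it falls inside a tolerance window. Below is a minimal Go sketch of that kind of comparison, not minikube's actual implementation; the one-second tolerance and all names are assumptions chosen for illustration.

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaWithinTolerance reports the absolute skew between the guest's
    // clock and the host's clock and whether it stays inside the tolerance.
    // Illustrative only; the real check lives in minikube's fix.go.
    func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	// Values echo the log above: the guest is ~85ms ahead of the host.
    	host := time.Date(2024, 12, 9, 11, 51, 43, 992024296, time.UTC)
    	guest := host.Add(85023192 * time.Nanosecond)

    	delta, ok := clockDeltaWithinTolerance(guest, host, time.Second) // 1s tolerance is assumed
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }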
	I1209 11:51:44.102980  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.103272  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:44.105929  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.106314  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.106341  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.106567  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107102  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107323  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107453  662109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:51:44.107507  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:44.107640  662109 ssh_runner.go:195] Run: cat /version.json
	I1209 11:51:44.107672  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:44.110422  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110792  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.110822  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110840  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110984  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:44.111194  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:44.111376  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.111395  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.111408  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:44.111569  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:44.111589  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:44.111722  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:44.111827  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:44.111986  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:44.228799  662109 ssh_runner.go:195] Run: systemctl --version
	I1209 11:51:44.234678  662109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:51:44.383290  662109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:51:44.388906  662109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:51:44.388981  662109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:51:44.405271  662109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:51:44.405308  662109 start.go:495] detecting cgroup driver to use...
	I1209 11:51:44.405389  662109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:51:44.425480  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:51:44.439827  662109 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:51:44.439928  662109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:51:44.454750  662109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:51:44.470828  662109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:51:44.595400  662109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:51:44.756743  662109 docker.go:233] disabling docker service ...
	I1209 11:51:44.756817  662109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:51:44.774069  662109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:51:44.788188  662109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:51:44.909156  662109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:51:45.036992  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:51:45.051284  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:51:45.071001  662109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:51:45.071074  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.081491  662109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:51:45.081549  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.091476  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.103237  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.114723  662109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:51:45.126330  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.136501  662109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.152804  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.163221  662109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:51:45.173297  662109 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:51:45.173379  662109 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:51:45.186209  662109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
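[editorial note] The three commands above follow a probe-then-fallback pattern: read the bridge netfilter sysctl, load br_netfilter when the key is missing, then make sure IPv4 forwarding is on. A rough Go sketch of that sequence is below; it shells out locally to the same commands the log records and is not the minikube code itself, which runs them through ssh_runner.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%s %v: %w (output: %s)", name, args, err, out)
    	}
    	return nil
    }

    // ensureBridgeNetfilter mirrors the probe-then-fallback seen in the log:
    // if net.bridge.bridge-nf-call-iptables cannot be read, load br_netfilter,
    // then enable IPv4 forwarding.
    func ensureBridgeNetfilter() error {
    	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		// The key is typically missing only because the module is not loaded.
    		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
    			return err
    		}
    	}
    	return run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println("netfilter setup failed:", err)
    	}
    }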
	I1209 11:51:45.195773  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:51:45.339593  662109 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:51:45.438766  662109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:51:45.438851  662109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:51:45.444775  662109 start.go:563] Will wait 60s for crictl version
	I1209 11:51:45.444847  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:45.449585  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:51:45.493796  662109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:51:45.493899  662109 ssh_runner.go:195] Run: crio --version
	I1209 11:51:45.521391  662109 ssh_runner.go:195] Run: crio --version
	I1209 11:51:45.551249  662109 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:51:45.552714  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:45.555910  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:45.556271  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:45.556298  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:45.556571  662109 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 11:51:45.560718  662109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:51:45.573027  662109 kubeadm.go:883] updating cluster {Name:no-preload-820741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:51:45.573171  662109 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:51:45.573226  662109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:51:45.613696  662109 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:51:45.613724  662109 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 11:51:45.613810  662109 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.613847  662109 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.613864  662109 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.613880  662109 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:45.613857  662109 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1209 11:51:45.613939  662109 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.613801  662109 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.613810  662109 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:45.615885  662109 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.615983  662109 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.615889  662109 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:45.615885  662109 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:45.615893  662109 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.615891  662109 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1209 11:51:45.615897  662109 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.615893  662109 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.819757  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.836546  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1209 11:51:45.851918  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.857461  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.857468  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.863981  662109 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1209 11:51:45.864038  662109 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.864122  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:45.865289  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.868361  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.030476  662109 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1209 11:51:46.030525  662109 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.030582  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030525  662109 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1209 11:51:46.030603  662109 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1209 11:51:46.030625  662109 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.030652  662109 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.030694  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030655  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030720  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.030760  662109 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1209 11:51:46.030794  662109 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.030823  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030823  662109 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1209 11:51:46.030845  662109 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.030868  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.041983  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.042072  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.042088  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.086909  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.086966  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.086997  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.141636  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.141723  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.141779  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.249908  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.249972  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.250024  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.250056  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.266345  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.266425  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.376691  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1209 11:51:46.376784  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1209 11:51:46.376904  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.376937  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.376911  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:46.376980  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.407997  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1209 11:51:46.408015  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1209 11:51:46.408120  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:46.408120  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:46.450341  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1209 11:51:46.450374  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.450445  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.450503  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1209 11:51:46.450537  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1209 11:51:46.450541  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1209 11:51:46.450570  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1209 11:51:46.450621  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:46.450621  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:46.450621  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1209 11:51:44.128421  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .Start
	I1209 11:51:44.128663  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring networks are active...
	I1209 11:51:44.129435  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring network default is active
	I1209 11:51:44.129805  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring network mk-old-k8s-version-014592 is active
	I1209 11:51:44.130314  662586 main.go:141] libmachine: (old-k8s-version-014592) Getting domain xml...
	I1209 11:51:44.131070  662586 main.go:141] libmachine: (old-k8s-version-014592) Creating domain...
	I1209 11:51:45.405214  662586 main.go:141] libmachine: (old-k8s-version-014592) Waiting to get IP...
	I1209 11:51:45.406116  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:45.406680  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:45.406716  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:45.406613  663492 retry.go:31] will retry after 249.130873ms: waiting for machine to come up
	I1209 11:51:45.657224  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:45.657727  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:45.657756  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:45.657687  663492 retry.go:31] will retry after 363.458278ms: waiting for machine to come up
	I1209 11:51:46.023431  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.023912  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.023945  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.023851  663492 retry.go:31] will retry after 313.220722ms: waiting for machine to come up
	I1209 11:51:46.339300  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.339850  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.339876  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.339791  663492 retry.go:31] will retry after 517.613322ms: waiting for machine to come up
	I1209 11:51:46.859825  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.860229  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.860260  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.860198  663492 retry.go:31] will retry after 710.195232ms: waiting for machine to come up
	I1209 11:51:47.572460  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:47.573030  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:47.573080  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:47.573008  663492 retry.go:31] will retry after 620.717522ms: waiting for machine to come up
	I1209 11:51:46.869631  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.822213  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.371704342s)
	I1209 11:51:48.822263  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1209 11:51:48.822262  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.371603127s)
	I1209 11:51:48.822296  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1209 11:51:48.822295  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.371584353s)
	I1209 11:51:48.822298  662109 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:48.822309  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1209 11:51:48.822324  662109 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.952666874s)
	I1209 11:51:48.822364  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:48.822367  662109 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1209 11:51:48.822416  662109 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.822460  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:50.794288  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.971891497s)
	I1209 11:51:50.794330  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1209 11:51:50.794357  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:50.794357  662109 ssh_runner.go:235] Completed: which crictl: (1.971876587s)
	I1209 11:51:50.794417  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:50.794437  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.195603  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:48.196140  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:48.196172  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:48.196083  663492 retry.go:31] will retry after 747.45082ms: waiting for machine to come up
	I1209 11:51:48.945230  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:48.945682  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:48.945737  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:48.945661  663492 retry.go:31] will retry after 1.307189412s: waiting for machine to come up
	I1209 11:51:50.254747  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:50.255335  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:50.255359  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:50.255276  663492 retry.go:31] will retry after 1.269881759s: waiting for machine to come up
	I1209 11:51:51.526966  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:51.527400  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:51.527431  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:51.527348  663492 retry.go:31] will retry after 1.424091669s: waiting for machine to come up
	I1209 11:51:52.958981  662109 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.164517823s)
	I1209 11:51:52.959044  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.164597978s)
	I1209 11:51:52.959089  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1209 11:51:52.959120  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:52.959057  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:52.959203  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:53.007629  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:54.832641  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.873398185s)
	I1209 11:51:54.832686  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1209 11:51:54.832694  662109 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.825022672s)
	I1209 11:51:54.832714  662109 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:54.832748  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1209 11:51:54.832769  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:54.832853  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:51:52.953290  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:52.953711  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:52.953743  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:52.953658  663492 retry.go:31] will retry after 2.009829783s: waiting for machine to come up
	I1209 11:51:54.965818  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:54.966337  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:54.966372  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:54.966285  663492 retry.go:31] will retry after 2.209879817s: waiting for machine to come up
	I1209 11:51:57.177397  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:57.177870  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:57.177901  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:57.177805  663492 retry.go:31] will retry after 2.999056002s: waiting for machine to come up
	I1209 11:51:58.433813  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.600992195s)
	I1209 11:51:58.433889  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1209 11:51:58.433913  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:58.433831  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.600948593s)
	I1209 11:51:58.433947  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1209 11:51:58.433961  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:59.792012  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.35801884s)
	I1209 11:51:59.792049  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1209 11:51:59.792078  662109 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:51:59.792127  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:52:00.635140  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1209 11:52:00.635193  662109 cache_images.go:123] Successfully loaded all cached images
	I1209 11:52:00.635212  662109 cache_images.go:92] duration metric: took 15.021464053s to LoadCachedImages
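	(Note: the "Loading image: ..." lines above come from minikube shelling out to podman on the guest, because CRI-O has no image-load command of its own but shares the same containers/storage backend. Below is a minimal, illustrative Go sketch of that step, assuming the image tarballs have already been copied to /var/lib/minikube/images; local exec.Command stands in for minikube's ssh_runner abstraction and is not the project's actual implementation.)

	// image_load_sketch.go — hedged illustration of the cached-image load loop.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func loadCachedImage(tarball string) error {
		// CRI-O cannot load tarballs itself, so the load goes through podman,
		// which writes into the same containers/storage CRI-O reads from.
		cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
		}
		return nil
	}

	func main() {
		images := []string{
			"/var/lib/minikube/images/kube-apiserver_v1.31.2",
			"/var/lib/minikube/images/etcd_3.5.15-0",
			"/var/lib/minikube/images/kube-scheduler_v1.31.2",
			"/var/lib/minikube/images/storage-provisioner_v5",
		}
		for _, img := range images {
			log.Printf("Loading image: %s", img)
			if err := loadCachedImage(img); err != nil {
				log.Fatal(err)
			}
		}
		fmt.Println("Successfully loaded all cached images")
	}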
	I1209 11:52:00.635232  662109 kubeadm.go:934] updating node { 192.168.39.169 8443 v1.31.2 crio true true} ...
	I1209 11:52:00.635395  662109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-820741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:00.635481  662109 ssh_runner.go:195] Run: crio config
	I1209 11:52:00.680321  662109 cni.go:84] Creating CNI manager for ""
	I1209 11:52:00.680345  662109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:00.680370  662109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:00.680394  662109 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.169 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-820741 NodeName:no-preload-820741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:00.680545  662109 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-820741"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.169"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.169"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
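	(Note: the kubeadm/kubelet/kube-proxy YAML above is rendered by minikube from the kubeadm options logged at kubeadm.go:189 and then copied to /var/tmp/minikube/kubeadm.yaml.new further down. The Go sketch below is a hypothetical illustration of rendering one such fragment with text/template; the parameter names used here (CgroupDriver, CRISocket, DNSDomain) are placeholders, not minikube's actual template variables.)

	// kubelet_config_sketch.go — hedged illustration of templating a config fragment.
	package main

	import (
		"os"
		"text/template"
	)

	const kubeletCfg = `apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: {{.CgroupDriver}}
	containerRuntimeEndpoint: {{.CRISocket}}
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "{{.DNSDomain}}"
	staticPodPath: /etc/kubernetes/manifests
	`

	func main() {
		params := struct {
			CgroupDriver, CRISocket, DNSDomain string
		}{"cgroupfs", "unix:///var/run/crio/crio.sock", "cluster.local"}

		tmpl := template.Must(template.New("kubelet").Parse(kubeletCfg))
		// The rendered YAML is what gets shipped to the guest as kubeadm.yaml.new.
		if err := tmpl.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}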
	
	I1209 11:52:00.680614  662109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:00.690391  662109 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:00.690484  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:00.699034  662109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1209 11:52:00.714710  662109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:00.730375  662109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1209 11:52:00.747519  662109 ssh_runner.go:195] Run: grep 192.168.39.169	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:00.751163  662109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:00.762405  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:00.881308  662109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:00.898028  662109 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741 for IP: 192.168.39.169
	I1209 11:52:00.898060  662109 certs.go:194] generating shared ca certs ...
	I1209 11:52:00.898085  662109 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:00.898349  662109 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:00.898415  662109 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:00.898429  662109 certs.go:256] generating profile certs ...
	I1209 11:52:00.898565  662109 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.key
	I1209 11:52:00.898646  662109 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.key.814e22a1
	I1209 11:52:00.898701  662109 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.key
	I1209 11:52:00.898859  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:00.898904  662109 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:00.898918  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:00.898949  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:00.898982  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:00.899007  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:00.899045  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:00.899994  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:00.943848  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:00.970587  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:01.025164  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:01.055766  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 11:52:01.089756  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:52:01.112171  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:01.135928  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 11:52:01.157703  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:01.179806  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:01.201663  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:01.223314  662109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:01.239214  662109 ssh_runner.go:195] Run: openssl version
	I1209 11:52:01.244687  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:01.254630  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.258801  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.258849  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.264219  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:01.274077  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:01.284511  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.289141  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.289216  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.295079  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:01.305606  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:01.315795  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.320085  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.320147  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.325590  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:01.335747  662109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:01.340113  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:01.346217  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:01.351799  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:01.357441  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:01.362784  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:01.368210  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
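	(Note: each "openssl x509 -noout -in ... -checkend 86400" command above checks that the given control-plane certificate stays valid for at least another 24 hours. The Go sketch below is an illustrative, stand-alone equivalent of that check; the certificate paths mirror the log, but the code itself is not minikube's.)

	// cert_checkend_sketch.go — hedged equivalent of "openssl x509 -checkend 86400".
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the certificate at path is still valid d from now.
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// -checkend 86400: does NotAfter lie beyond now + 24h?
		return cert.NotAfter.After(time.Now().Add(d)), nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		} {
			ok, err := validFor(p, 24*time.Hour)
			fmt.Println(p, ok, err)
		}
	}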
	I1209 11:52:01.373975  662109 kubeadm.go:392] StartCluster: {Name:no-preload-820741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:01.374101  662109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:01.374160  662109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:01.409780  662109 cri.go:89] found id: ""
	I1209 11:52:01.409852  662109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:01.419505  662109 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:01.419550  662109 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:01.419603  662109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:01.429000  662109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:01.429999  662109 kubeconfig.go:125] found "no-preload-820741" server: "https://192.168.39.169:8443"
	I1209 11:52:01.432151  662109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:01.440964  662109 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.169
	I1209 11:52:01.441003  662109 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:01.441021  662109 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:01.441084  662109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:01.474788  662109 cri.go:89] found id: ""
	I1209 11:52:01.474865  662109 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:01.491360  662109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:01.500483  662109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:01.500505  662109 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:01.500558  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:01.509190  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:01.509251  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:01.518248  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:01.526845  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:01.526909  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:01.535849  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:01.544609  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:01.544672  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:01.553527  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:01.561876  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:01.561928  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
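	(Note: the grep/rm sequence above is stale-kubeconfig cleanup: any file under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is removed so that the "kubeadm init phase kubeconfig" step further down regenerates it. Below is an illustrative Go sketch of the same idea, with local file access standing in for minikube's ssh_runner; it is a sketch, not the project's code.)

	// stale_kubeconfig_sketch.go — hedged illustration of the cleanup loop.
	package main

	import (
		"bytes"
		"log"
		"os"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(conf)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				// Missing file or wrong endpoint: remove it so kubeadm recreates it.
				if rmErr := os.Remove(conf); rmErr != nil && !os.IsNotExist(rmErr) {
					log.Printf("removing %s: %v", conf, rmErr)
				}
				continue
			}
			log.Printf("%s already points at %s, keeping it", conf, endpoint)
		}
	}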
	I1209 11:52:00.178781  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:00.179225  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:52:00.179273  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:52:00.179165  663492 retry.go:31] will retry after 4.532370187s: waiting for machine to come up
	I1209 11:52:05.915073  663024 start.go:364] duration metric: took 2m6.318720193s to acquireMachinesLock for "default-k8s-diff-port-482476"
	I1209 11:52:05.915166  663024 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:52:05.915179  663024 fix.go:54] fixHost starting: 
	I1209 11:52:05.915652  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:05.915716  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:05.933810  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
	I1209 11:52:05.934363  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:05.935019  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:52:05.935071  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:05.935489  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:05.935682  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:05.935879  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:52:05.937627  663024 fix.go:112] recreateIfNeeded on default-k8s-diff-port-482476: state=Stopped err=<nil>
	I1209 11:52:05.937660  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	W1209 11:52:05.937842  663024 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:52:05.939893  663024 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-482476" ...
	I1209 11:52:01.570657  662109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:01.579782  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:01.680268  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.573653  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.762024  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.826444  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.932170  662109 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:02.932291  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.432933  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.933186  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.948529  662109 api_server.go:72] duration metric: took 1.016357501s to wait for apiserver process to appear ...
	I1209 11:52:03.948565  662109 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:03.948595  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.443635  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.443675  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:06.443692  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.490801  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.490839  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:06.490860  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.502460  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.502497  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:04.713201  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.713711  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has current primary IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.713817  662586 main.go:141] libmachine: (old-k8s-version-014592) Found IP for machine: 192.168.61.132
	I1209 11:52:04.713853  662586 main.go:141] libmachine: (old-k8s-version-014592) Reserving static IP address...
	I1209 11:52:04.714267  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "old-k8s-version-014592", mac: "52:54:00:54:72:3e", ip: "192.168.61.132"} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.714298  662586 main.go:141] libmachine: (old-k8s-version-014592) Reserved static IP address: 192.168.61.132
	I1209 11:52:04.714318  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | skip adding static IP to network mk-old-k8s-version-014592 - found existing host DHCP lease matching {name: "old-k8s-version-014592", mac: "52:54:00:54:72:3e", ip: "192.168.61.132"}
	I1209 11:52:04.714332  662586 main.go:141] libmachine: (old-k8s-version-014592) Waiting for SSH to be available...
	I1209 11:52:04.714347  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Getting to WaitForSSH function...
	I1209 11:52:04.716632  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.716972  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.717005  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.717129  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Using SSH client type: external
	I1209 11:52:04.717157  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa (-rw-------)
	I1209 11:52:04.717192  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:04.717206  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | About to run SSH command:
	I1209 11:52:04.717223  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | exit 0
	I1209 11:52:04.846290  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:04.846675  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetConfigRaw
	I1209 11:52:04.847483  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:04.850430  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.850859  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.850888  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.851113  662586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/config.json ...
	I1209 11:52:04.851328  662586 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:04.851348  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:04.851547  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:04.854318  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.854622  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.854654  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.854782  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:04.854959  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.855134  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.855276  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:04.855438  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:04.855696  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:04.855709  662586 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:04.963021  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:04.963059  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:04.963344  662586 buildroot.go:166] provisioning hostname "old-k8s-version-014592"
	I1209 11:52:04.963368  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:04.963545  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:04.966102  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.966461  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.966496  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.966607  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:04.966780  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.966919  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.967056  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:04.967221  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:04.967407  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:04.967419  662586 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-014592 && echo "old-k8s-version-014592" | sudo tee /etc/hostname
	I1209 11:52:05.094147  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-014592
	
	I1209 11:52:05.094210  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.097298  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.097729  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.097765  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.097949  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.098197  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.098460  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.098632  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.098829  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.099046  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.099082  662586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-014592' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-014592/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-014592' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:05.210739  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:05.210785  662586 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:05.210846  662586 buildroot.go:174] setting up certificates
	I1209 11:52:05.210859  662586 provision.go:84] configureAuth start
	I1209 11:52:05.210881  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:05.211210  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:05.214546  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.214937  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.214967  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.215167  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.217866  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.218269  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.218300  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.218452  662586 provision.go:143] copyHostCerts
	I1209 11:52:05.218530  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:05.218558  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:05.218630  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:05.218807  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:05.218820  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:05.218863  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:05.218943  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:05.218953  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:05.218983  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:05.219060  662586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-014592 san=[127.0.0.1 192.168.61.132 localhost minikube old-k8s-version-014592]
	I1209 11:52:05.292744  662586 provision.go:177] copyRemoteCerts
	I1209 11:52:05.292830  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:05.292867  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.296244  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.296670  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.296712  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.296896  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.297111  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.297330  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.297514  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.381148  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:05.404883  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 11:52:05.433421  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:05.456775  662586 provision.go:87] duration metric: took 245.894878ms to configureAuth
	I1209 11:52:05.456811  662586 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:05.457003  662586 config.go:182] Loaded profile config "old-k8s-version-014592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 11:52:05.457082  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.459984  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.460372  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.460415  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.460631  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.460851  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.461021  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.461217  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.461481  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.461702  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.461722  662586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:05.683276  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:05.683311  662586 machine.go:96] duration metric: took 831.968459ms to provisionDockerMachine
	I1209 11:52:05.683335  662586 start.go:293] postStartSetup for "old-k8s-version-014592" (driver="kvm2")
	I1209 11:52:05.683349  662586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:05.683391  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.683809  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:05.683850  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.687116  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.687540  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.687579  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.687787  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.688013  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.688204  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.688439  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.768777  662586 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:05.772572  662586 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:05.772603  662586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:05.772690  662586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:05.772813  662586 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:05.772942  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:05.784153  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:05.808677  662586 start.go:296] duration metric: took 125.320445ms for postStartSetup
	I1209 11:52:05.808736  662586 fix.go:56] duration metric: took 21.705557963s for fixHost
	I1209 11:52:05.808766  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.811685  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.812053  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.812090  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.812426  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.812639  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.812853  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.812996  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.813345  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.813562  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.813572  662586 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:05.914863  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745125.875320243
	
	I1209 11:52:05.914892  662586 fix.go:216] guest clock: 1733745125.875320243
	I1209 11:52:05.914906  662586 fix.go:229] Guest: 2024-12-09 11:52:05.875320243 +0000 UTC Remote: 2024-12-09 11:52:05.808742373 +0000 UTC m=+218.159686894 (delta=66.57787ms)
	I1209 11:52:05.914941  662586 fix.go:200] guest clock delta is within tolerance: 66.57787ms
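	(Note: the guest-clock check above runs "date +%s.%N" on the VM, parses the result as fractional seconds, and compares it with the host clock; here the delta is ~66ms and accepted. The Go sketch below mirrors that arithmetic; the one-second tolerance is an assumption for illustration, not minikube's documented threshold.)

	// guest_clock_sketch.go — hedged illustration of the clock-skew check.
	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// checkGuestClock parses the guest's "date +%s.%N" output and reports the
	// absolute skew against the local clock and whether it is within tolerance.
	func checkGuestClock(guestOutput string, tolerance time.Duration) (time.Duration, bool, error) {
		secs, err := strconv.ParseFloat(guestOutput, 64)
		if err != nil {
			return 0, false, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance, nil
	}

	func main() {
		// Timestamp value taken from the SSH output in the log above.
		delta, ok, err := checkGuestClock("1733745125.875320243", time.Second)
		fmt.Println(delta, ok, err)
	}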
	I1209 11:52:05.914952  662586 start.go:83] releasing machines lock for "old-k8s-version-014592", held for 21.811813657s
	I1209 11:52:05.914983  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.915289  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:05.918015  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.918513  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.918546  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.918662  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919315  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919508  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919628  662586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:05.919684  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.919739  662586 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:05.919767  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.922529  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.922816  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923096  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.923121  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923223  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.923258  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923291  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.923459  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.923602  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.923616  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.923848  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.923900  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.924030  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.924104  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:06.037215  662586 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:06.043193  662586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:06.193717  662586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:06.199693  662586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:06.199786  662586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:06.216007  662586 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:06.216040  662586 start.go:495] detecting cgroup driver to use...
	I1209 11:52:06.216131  662586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:06.233631  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:06.249730  662586 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:06.249817  662586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:06.265290  662586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:06.281676  662586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:06.432116  662586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:06.605899  662586 docker.go:233] disabling docker service ...
	I1209 11:52:06.606004  662586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:06.622861  662586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:06.637605  662586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:06.772842  662586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:06.905950  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:06.923048  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:06.943483  662586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 11:52:06.943542  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.957647  662586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:06.957725  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.970221  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.981243  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.992084  662586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:07.004284  662586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:07.014329  662586 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:07.014411  662586 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:07.028104  662586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:52:07.038782  662586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:07.155779  662586 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:07.271726  662586 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:07.271815  662586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:07.276994  662586 start.go:563] Will wait 60s for crictl version
	I1209 11:52:07.277061  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:07.281212  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:07.328839  662586 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:07.328959  662586 ssh_runner.go:195] Run: crio --version
	I1209 11:52:07.360632  662586 ssh_runner.go:195] Run: crio --version
	I1209 11:52:07.393046  662586 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1209 11:52:07.394357  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:07.398002  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:07.398539  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:07.398564  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:07.398893  662586 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:07.404512  662586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:07.417822  662586 kubeadm.go:883] updating cluster {Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:07.418006  662586 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 11:52:07.418108  662586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:07.473163  662586 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:52:07.473249  662586 ssh_runner.go:195] Run: which lz4
	I1209 11:52:07.478501  662586 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:07.483744  662586 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:07.483786  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
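The preload transfer above is a stat-then-copy pattern: the tarball is only shipped to the guest when the existence check on /preloaded.tar.lz4 fails. A minimal local-filesystem sketch of that pattern, assuming placeholder paths; the real flow runs both the stat and the copy over SSH through minikube's ssh_runner, and the helper name here is illustrative, not minikube's API:

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// copyIfMissing copies src to dst only when dst does not already exist,
// mirroring the log above where a failed stat on /preloaded.tar.lz4
// triggers the (much larger) transfer of the cached preload tarball.
func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to do
	} else if !os.IsNotExist(err) {
		return fmt.Errorf("stat %s: %w", dst, err)
	}

	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	n, err := io.Copy(out, in)
	if err != nil {
		return err
	}
	fmt.Printf("copied %s -> %s (%d bytes)\n", src, dst, n)
	return nil
}

func main() {
	// Illustrative paths only; the real transfer targets the guest VM over SSH.
	if err := copyIfMissing("preloaded-images.tar.lz4", "/tmp/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```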
	I1209 11:52:06.949438  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.959097  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:06.959150  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:07.449249  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:07.466817  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:07.466860  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:07.948998  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:07.958340  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I1209 11:52:07.966049  662109 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:07.966095  662109 api_server.go:131] duration metric: took 4.017521352s to wait for apiserver health ...
	I1209 11:52:07.966111  662109 cni.go:84] Creating CNI manager for ""
	I1209 11:52:07.966121  662109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:07.967962  662109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
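The healthz exchange above is a plain poll loop: hit /healthz, treat a 500 with failed post-start hooks (e.g. rbac/bootstrap-roles) as "not ready yet", and stop once the endpoint returns 200 "ok". A minimal sketch of such a loop, assuming a local apiserver with a self-signed certificate (hence the InsecureSkipVerify; minikube's real check authenticates with the cluster CA and client certs rather than skipping verification):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Self-signed apiserver cert in this sketch; real code verifies against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// A 500 while post-start hooks finish just means the control plane is still coming up.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.169:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```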
	I1209 11:52:05.941206  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Start
	I1209 11:52:05.941411  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring networks are active...
	I1209 11:52:05.942245  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring network default is active
	I1209 11:52:05.942724  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring network mk-default-k8s-diff-port-482476 is active
	I1209 11:52:05.943274  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Getting domain xml...
	I1209 11:52:05.944080  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Creating domain...
	I1209 11:52:07.394633  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting to get IP...
	I1209 11:52:07.396032  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.397444  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.397560  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.397434  663663 retry.go:31] will retry after 205.256699ms: waiting for machine to come up
	I1209 11:52:07.604209  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.604884  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.604920  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.604828  663663 retry.go:31] will retry after 291.255961ms: waiting for machine to come up
	I1209 11:52:07.897467  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.898992  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.899020  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.898866  663663 retry.go:31] will retry after 437.180412ms: waiting for machine to come up
	I1209 11:52:08.337664  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.338195  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.338235  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:08.338151  663663 retry.go:31] will retry after 603.826089ms: waiting for machine to come up
	I1209 11:52:08.944048  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.944672  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.944702  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:08.944612  663663 retry.go:31] will retry after 557.882868ms: waiting for machine to come up
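The "will retry after ..." lines above come from a bounded retry loop: each failed DHCP-lease lookup schedules another attempt after a growing, slightly randomized delay. A rough sketch of that retry-with-backoff shape, with a stand-in predicate instead of a real libvirt lease query (function names and timings here are illustrative, not minikube's retry package):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn with a growing, jittered delay until it succeeds
// or the deadline passes, much like the lease-wait loop logged above.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d failed (%v), will retry after %s\n", attempt, err, wait)
		time.Sleep(wait)
		delay *= 2 // back off before the next attempt
	}
}

func main() {
	start := time.Now()
	err := retryUntil(5*time.Second, func() error {
		// Stand-in for "look up the domain's current IP in the libvirt lease table".
		if time.Since(start) < 2*time.Second {
			return errors.New("machine has no IP yet")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```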
	I1209 11:52:07.969367  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:07.986045  662109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:52:08.075377  662109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:08.091609  662109 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:08.091648  662109 system_pods.go:61] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:08.091656  662109 system_pods.go:61] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:08.091664  662109 system_pods.go:61] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:08.091670  662109 system_pods.go:61] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:08.091675  662109 system_pods.go:61] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:52:08.091681  662109 system_pods.go:61] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:08.091686  662109 system_pods.go:61] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:08.091691  662109 system_pods.go:61] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:08.091699  662109 system_pods.go:74] duration metric: took 16.289433ms to wait for pod list to return data ...
	I1209 11:52:08.091707  662109 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:08.096961  662109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:08.097010  662109 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:08.097047  662109 node_conditions.go:105] duration metric: took 5.334194ms to run NodePressure ...
	I1209 11:52:08.097073  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:08.573868  662109 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:08.583670  662109 kubeadm.go:739] kubelet initialised
	I1209 11:52:08.583700  662109 kubeadm.go:740] duration metric: took 9.800796ms waiting for restarted kubelet to initialise ...
	I1209 11:52:08.583713  662109 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:08.592490  662109 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.600581  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.600611  662109 pod_ready.go:82] duration metric: took 8.087599ms for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.600623  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.600633  662109 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.609663  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "etcd-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.609698  662109 pod_ready.go:82] duration metric: took 9.054194ms for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.609712  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "etcd-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.609722  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.615482  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-apiserver-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.615514  662109 pod_ready.go:82] duration metric: took 5.78152ms for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.615526  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-apiserver-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.615536  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.623662  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.623698  662109 pod_ready.go:82] duration metric: took 8.151877ms for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.623713  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.623722  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.978286  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-proxy-hpvvp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.978323  662109 pod_ready.go:82] duration metric: took 354.589596ms for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.978344  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-proxy-hpvvp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.978356  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:09.378434  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-scheduler-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.378471  662109 pod_ready.go:82] duration metric: took 400.107028ms for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:09.378484  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-scheduler-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.378494  662109 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:09.778087  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.778117  662109 pod_ready.go:82] duration metric: took 399.613592ms for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:09.778129  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.778138  662109 pod_ready.go:39] duration metric: took 1.194413796s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:09.778162  662109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:52:09.793629  662109 ops.go:34] apiserver oom_adj: -16
	I1209 11:52:09.793663  662109 kubeadm.go:597] duration metric: took 8.374104555s to restartPrimaryControlPlane
	I1209 11:52:09.793681  662109 kubeadm.go:394] duration metric: took 8.419719684s to StartCluster
	I1209 11:52:09.793708  662109 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:09.793848  662109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:52:09.796407  662109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:09.796774  662109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:52:09.796837  662109 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:52:09.796954  662109 addons.go:69] Setting storage-provisioner=true in profile "no-preload-820741"
	I1209 11:52:09.796975  662109 addons.go:234] Setting addon storage-provisioner=true in "no-preload-820741"
	W1209 11:52:09.796984  662109 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:52:09.797023  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.797048  662109 config.go:182] Loaded profile config "no-preload-820741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:09.797086  662109 addons.go:69] Setting default-storageclass=true in profile "no-preload-820741"
	I1209 11:52:09.797110  662109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-820741"
	I1209 11:52:09.797119  662109 addons.go:69] Setting metrics-server=true in profile "no-preload-820741"
	I1209 11:52:09.797150  662109 addons.go:234] Setting addon metrics-server=true in "no-preload-820741"
	W1209 11:52:09.797160  662109 addons.go:243] addon metrics-server should already be in state true
	I1209 11:52:09.797204  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.797545  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797571  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797579  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797596  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.797611  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.797620  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.799690  662109 out.go:177] * Verifying Kubernetes components...
	I1209 11:52:09.801035  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:09.814968  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33991
	I1209 11:52:09.815010  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34319
	I1209 11:52:09.815576  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.815715  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.816340  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.816361  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.816666  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.816683  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.816745  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.817402  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.817449  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.818118  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.818680  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.818718  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.842345  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37501
	I1209 11:52:09.842582  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44183
	I1209 11:52:09.842703  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38793
	I1209 11:52:09.843479  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843608  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843667  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843973  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.843999  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.844168  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.844180  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.844575  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.844773  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.845107  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.845122  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.845633  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.845887  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.847386  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.848553  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.849410  662109 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:52:09.849690  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.850230  662109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:09.850303  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:52:09.850323  662109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:52:09.850346  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.851051  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.851404  662109 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:52:09.851426  662109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:52:09.851447  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.855303  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.855935  662109 addons.go:234] Setting addon default-storageclass=true in "no-preload-820741"
	W1209 11:52:09.855958  662109 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:52:09.855991  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.856373  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.856429  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.857583  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.857614  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.857874  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.858206  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.858588  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.858766  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:09.859464  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.859875  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.859897  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.860238  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.860449  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.860597  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.860736  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:09.880235  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I1209 11:52:09.880846  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.881409  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.881429  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.881855  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.882651  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.882711  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.904576  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I1209 11:52:09.905132  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.905765  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.905788  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.906224  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.906469  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.908475  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.908715  662109 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:52:09.908735  662109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:52:09.908756  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.912294  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.912928  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.912963  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.913128  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.913383  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.913563  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.913711  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:10.141200  662109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:10.172182  662109 node_ready.go:35] waiting up to 6m0s for node "no-preload-820741" to be "Ready" ...
	I1209 11:52:10.306617  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:52:10.306646  662109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:52:10.321962  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:52:10.326125  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:52:10.360534  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:52:10.360568  662109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:52:10.470875  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:52:10.470917  662109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:52:10.555610  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:52:11.721480  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.395310752s)
	I1209 11:52:11.721571  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721638  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.721581  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.165925756s)
	I1209 11:52:11.721735  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.399738143s)
	I1209 11:52:11.721753  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721766  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.721765  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721779  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722002  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722014  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722021  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722028  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722201  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722213  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722221  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722226  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722320  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.722329  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722349  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.722360  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722384  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722395  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722424  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722438  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722465  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722475  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722490  662109 addons.go:475] Verifying addon metrics-server=true in "no-preload-820741"
	I1209 11:52:11.722560  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722579  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722564  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.729638  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.729660  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.729934  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.729950  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.731642  662109 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
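The addon manifests above are installed by shelling out to the bundled kubectl with an explicit KUBECONFIG, exactly as the Run: lines show. A rough sketch of that pattern using os/exec; the binary path and manifest names are copied from the log for illustration, sudo and minikube's error plumbing are omitted:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests runs `kubectl apply -f ...` against the node's kubeconfig,
// mirroring the metrics-server/storage-provisioner installation above.
func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.31.2/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```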
	I1209 11:52:09.097654  662586 crio.go:462] duration metric: took 1.619191765s to copy over tarball
	I1209 11:52:09.097748  662586 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:12.304496  662586 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.20670295s)
	I1209 11:52:12.304543  662586 crio.go:469] duration metric: took 3.206852542s to extract the tarball
	I1209 11:52:12.304553  662586 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:12.347991  662586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:12.385411  662586 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:52:12.385438  662586 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 11:52:12.385533  662586 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:12.385557  662586 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.385570  662586 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.385609  662586 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.385641  662586 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 11:52:12.385650  662586 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.385645  662586 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.385620  662586 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.387326  662586 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.387335  662586 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:12.387371  662586 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 11:52:12.387372  662586 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.387328  662586 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.387338  662586 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.387328  662586 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.387383  662586 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.621631  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.623694  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.632536  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 11:52:12.634550  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.638401  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.641071  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.645344  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:09.504566  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:09.505124  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:09.505155  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:09.505076  663663 retry.go:31] will retry after 636.87343ms: waiting for machine to come up
	I1209 11:52:10.144387  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.145090  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.145119  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:10.145037  663663 retry.go:31] will retry after 716.448577ms: waiting for machine to come up
	I1209 11:52:10.863113  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.863816  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.863848  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:10.863762  663663 retry.go:31] will retry after 901.007245ms: waiting for machine to come up
	I1209 11:52:11.766356  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:11.766745  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:11.766773  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:11.766688  663663 retry.go:31] will retry after 1.570604193s: waiting for machine to come up
	I1209 11:52:13.339318  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:13.339796  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:13.339828  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:13.339744  663663 retry.go:31] will retry after 1.928200683s: waiting for machine to come up
	I1209 11:52:11.732956  662109 addons.go:510] duration metric: took 1.936137102s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1209 11:52:12.175844  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:14.504491  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:12.756066  662586 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 11:52:12.756121  662586 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.756134  662586 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 11:52:12.756175  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.756179  662586 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.756230  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.808091  662586 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 11:52:12.808139  662586 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 11:52:12.808186  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809593  662586 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 11:52:12.809622  662586 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 11:52:12.809637  662586 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.809659  662586 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.809682  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809712  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809775  662586 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 11:52:12.809803  662586 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.809829  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.809841  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809724  662586 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 11:52:12.809873  662586 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.809898  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809933  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.812256  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:12.819121  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.825106  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.910431  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.910501  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.910560  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.910503  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.910638  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:12.910713  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.930461  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:13.079147  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:13.079189  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:13.079233  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:13.079276  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:13.079418  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:13.079447  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:13.079517  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:13.224753  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 11:52:13.227126  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 11:52:13.227190  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:13.227253  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 11:52:13.227291  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:13.227332  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 11:52:13.227393  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 11:52:13.277747  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 11:52:13.285286  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 11:52:13.663858  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:13.805603  662586 cache_images.go:92] duration metric: took 1.420145666s to LoadCachedImages
	W1209 11:52:13.805814  662586 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1209 11:52:13.805848  662586 kubeadm.go:934] updating node { 192.168.61.132 8443 v1.20.0 crio true true} ...
	I1209 11:52:13.805980  662586 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-014592 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
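The drop-in above pins the kubelet to the v1.20.0 binary under /var/lib/minikube/binaries and passes the legacy --container-runtime=remote/--network-plugin=cni flags that this Kubernetes release still expects. A minimal Go sketch of rendering such an ExecStart line from node parameters (a hypothetical template, not minikube's actual generator):

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeletUnit is a simplified stand-in for the drop-in minikube renders above.
    const kubeletUnit = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime=remote --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf
    `

    func main() {
    	// Values taken from the log; any other node substitutes its own.
    	params := struct{ Version, CRISocket, NodeName, NodeIP string }{
    		"v1.20.0", "unix:///var/run/crio/crio.sock", "old-k8s-version-014592", "192.168.61.132",
    	}
    	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
    	// Print the rendered unit; minikube instead copies it to the node's kubelet.service.d directory.
    	if err := tmpl.Execute(os.Stdout, params); err != nil {
    		panic(err)
    	}
    }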
	I1209 11:52:13.806079  662586 ssh_runner.go:195] Run: crio config
	I1209 11:52:13.870766  662586 cni.go:84] Creating CNI manager for ""
	I1209 11:52:13.870797  662586 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:13.870813  662586 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:13.870841  662586 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.132 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-014592 NodeName:old-k8s-version-014592 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 11:52:13.871050  662586 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-014592"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
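The kubeadm.yaml generated above is a multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---. A small sketch of walking such a stream with gopkg.in/yaml.v3 to list each document's kind (illustrative only; the local file name is an assumption, minikube simply copies the rendered bytes to /var/tmp/minikube/kubeadm.yaml.new as logged just below):

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // local copy of the generated config; path is an assumption
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		// Each document carries its own apiVersion/kind, e.g. kubeadm.k8s.io/v1beta2 ClusterConfiguration.
    		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
    	}
    }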
	I1209 11:52:13.871136  662586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 11:52:13.881556  662586 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:13.881628  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:13.891122  662586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1209 11:52:13.908181  662586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:13.925041  662586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1209 11:52:13.941567  662586 ssh_runner.go:195] Run: grep 192.168.61.132	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:13.945502  662586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:13.957476  662586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:14.091699  662586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:14.108772  662586 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592 for IP: 192.168.61.132
	I1209 11:52:14.108810  662586 certs.go:194] generating shared ca certs ...
	I1209 11:52:14.108838  662586 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:14.109024  662586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:14.109087  662586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:14.109105  662586 certs.go:256] generating profile certs ...
	I1209 11:52:14.109248  662586 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.key
	I1209 11:52:14.109323  662586 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key.28078577
	I1209 11:52:14.109383  662586 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key
	I1209 11:52:14.109572  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:14.109609  662586 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:14.109619  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:14.109659  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:14.109697  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:14.109737  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:14.109802  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:14.110497  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:14.145815  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:14.179452  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:14.217469  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:14.250288  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 11:52:14.287110  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:52:14.317190  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:14.356825  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:14.379756  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:14.402045  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:14.425287  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:14.448025  662586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:14.464144  662586 ssh_runner.go:195] Run: openssl version
	I1209 11:52:14.470256  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:14.481298  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.485849  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.485904  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.492321  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:14.504155  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:14.515819  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.520876  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.520955  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.527295  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:14.538319  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:14.549753  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.554273  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.554341  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.559893  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
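The three ln -fs commands above (51391683.0, 3ec20f2e.0, b5213941.0) create the subject-hash symlinks that OpenSSL uses to locate trusted CA certificates in /etc/ssl/certs. A sketch of the same step from Go via os/exec, shelling out to openssl exactly as the log does (hypothetical helper; minikube runs the equivalent shell over SSH):

    package main

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // hashLink mirrors the logged commands: it asks openssl for the certificate's
    // subject hash and recreates the /etc/ssl/certs/<hash>.0 symlink (ln -fs semantics).
    func hashLink(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // ignore "does not exist"; -f semantics
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		panic(err)
    	}
    }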
	I1209 11:52:14.570744  662586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:14.575763  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:14.582279  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:14.588549  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:14.594376  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:14.599758  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:14.605497  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
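Each `openssl x509 -checkend 86400` run above asks whether a control-plane certificate will still be valid 24 hours from now. The equivalent check in Go with crypto/x509 (a sketch; the certificate path is taken from the first check in the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of -checkend 86400: fail if the cert is no longer valid 24h from now.
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid for at least 24h")
    }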
	I1209 11:52:14.611083  662586 kubeadm.go:392] StartCluster: {Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:14.611213  662586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:14.611288  662586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:14.649447  662586 cri.go:89] found id: ""
	I1209 11:52:14.649538  662586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:14.660070  662586 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:14.660094  662586 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:14.660145  662586 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:14.670412  662586 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:14.671387  662586 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-014592" does not appear in /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:52:14.672043  662586 kubeconfig.go:62] /home/jenkins/minikube-integration/20068-609844/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-014592" cluster setting kubeconfig missing "old-k8s-version-014592" context setting]
	I1209 11:52:14.673337  662586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:14.708285  662586 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:14.719486  662586 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.132
	I1209 11:52:14.719535  662586 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:14.719563  662586 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:14.719635  662586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:14.755280  662586 cri.go:89] found id: ""
	I1209 11:52:14.755369  662586 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:14.771385  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:14.781364  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:14.781387  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:14.781455  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:14.790942  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:14.791016  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:14.800481  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:14.809875  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:14.809948  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:14.819619  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:14.831670  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:14.831750  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:14.844244  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:14.853328  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:14.853403  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:14.862428  662586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:14.871346  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.007799  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.697594  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.921787  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:16.031826  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
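For a restart, minikube re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the kubeadm.yaml it just copied, rather than a full kubeadm init. A sketch of sequencing those phases with os/exec (the sudo/PATH prefix from the log is dropped for brevity; this is an illustration, not minikube's code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, phase := range phases {
    		// Mirrors the logged commands: kubeadm init phase <phase> --config /var/tmp/minikube/kubeadm.yaml
    		args := append([]string{"init", "phase"}, phase...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		out, err := exec.Command("kubeadm", args...).CombinedOutput()
    		if err != nil {
    			fmt.Printf("phase %v failed: %v\n%s", phase, err, out)
    			return
    		}
    	}
    	fmt.Println("all restart phases completed")
    }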
	I1209 11:52:16.132199  662586 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:16.132310  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:16.633329  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:17.133389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:17.632581  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:15.270255  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:15.270804  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:15.270836  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:15.270741  663663 retry.go:31] will retry after 2.90998032s: waiting for machine to come up
	I1209 11:52:18.182069  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:18.182741  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:18.182774  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:18.182689  663663 retry.go:31] will retry after 3.196470388s: waiting for machine to come up
	I1209 11:52:16.676188  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:17.175894  662109 node_ready.go:49] node "no-preload-820741" has status "Ready":"True"
	I1209 11:52:17.175928  662109 node_ready.go:38] duration metric: took 7.003696159s for node "no-preload-820741" to be "Ready" ...
	I1209 11:52:17.175945  662109 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:17.180647  662109 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:19.188583  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:18.133165  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:18.632403  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:19.132416  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:19.633332  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:20.133343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:20.632968  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:21.133411  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:21.632656  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:22.132876  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:22.632816  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
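The repeated pgrep runs above are minikube polling for the kube-apiserver process to appear once kubelet starts the static pods; the log shows roughly one probe every 500ms. A minimal polling sketch (the two-minute timeout is an assumption for the example, not minikube's configured value):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // assumed timeout for the sketch
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists, just like the logged command.
    		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }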
	I1209 11:52:21.381260  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:21.381912  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:21.381943  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:21.381834  663663 retry.go:31] will retry after 3.621023528s: waiting for machine to come up
	I1209 11:52:26.142813  661546 start.go:364] duration metric: took 56.424295065s to acquireMachinesLock for "embed-certs-005123"
	I1209 11:52:26.142877  661546 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:52:26.142886  661546 fix.go:54] fixHost starting: 
	I1209 11:52:26.143376  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:26.143416  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:26.164438  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45591
	I1209 11:52:26.165041  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:26.165779  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:52:26.165828  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:26.166318  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:26.166544  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:26.166745  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:52:26.168534  661546 fix.go:112] recreateIfNeeded on embed-certs-005123: state=Stopped err=<nil>
	I1209 11:52:26.168564  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	W1209 11:52:26.168753  661546 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:52:26.170973  661546 out.go:177] * Restarting existing kvm2 VM for "embed-certs-005123" ...
	I1209 11:52:26.172269  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Start
	I1209 11:52:26.172500  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring networks are active...
	I1209 11:52:26.173391  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring network default is active
	I1209 11:52:26.173747  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring network mk-embed-certs-005123 is active
	I1209 11:52:26.174208  661546 main.go:141] libmachine: (embed-certs-005123) Getting domain xml...
	I1209 11:52:26.174990  661546 main.go:141] libmachine: (embed-certs-005123) Creating domain...
	I1209 11:52:21.687274  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:23.688011  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:24.187886  662109 pod_ready.go:93] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.187917  662109 pod_ready.go:82] duration metric: took 7.007243363s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.187928  662109 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.193936  662109 pod_ready.go:93] pod "etcd-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.193958  662109 pod_ready.go:82] duration metric: took 6.02353ms for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.193966  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.203685  662109 pod_ready.go:93] pod "kube-apiserver-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.203712  662109 pod_ready.go:82] duration metric: took 9.739287ms for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.203722  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.210004  662109 pod_ready.go:93] pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.210034  662109 pod_ready.go:82] duration metric: took 6.304008ms for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.210048  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.216225  662109 pod_ready.go:93] pod "kube-proxy-hpvvp" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.216249  662109 pod_ready.go:82] duration metric: took 6.193945ms for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.216258  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.584682  662109 pod_ready.go:93] pod "kube-scheduler-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.584711  662109 pod_ready.go:82] duration metric: took 368.445803ms for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.584724  662109 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
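Each pod_ready line blocks until the pod reports the PodReady condition as True; coredns took about 7s here, after which the wait moves through etcd, the control-plane pods, kube-proxy and finally metrics-server. A minimal client-go sketch of that readiness check (a hypothetical helper, not minikube's actual pod_ready code):

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the named pod has condition PodReady == True.
    func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }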
	I1209 11:52:25.004323  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.004761  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Found IP for machine: 192.168.50.25
	I1209 11:52:25.004791  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has current primary IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.004798  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Reserving static IP address...
	I1209 11:52:25.005275  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-482476", mac: "52:54:00:f0:c9:8a", ip: "192.168.50.25"} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.005301  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | skip adding static IP to network mk-default-k8s-diff-port-482476 - found existing host DHCP lease matching {name: "default-k8s-diff-port-482476", mac: "52:54:00:f0:c9:8a", ip: "192.168.50.25"}
	I1209 11:52:25.005314  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Reserved static IP address: 192.168.50.25
	I1209 11:52:25.005328  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for SSH to be available...
	I1209 11:52:25.005342  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Getting to WaitForSSH function...
	I1209 11:52:25.007758  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.008146  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.008189  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.008291  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Using SSH client type: external
	I1209 11:52:25.008318  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa (-rw-------)
	I1209 11:52:25.008348  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:25.008361  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | About to run SSH command:
	I1209 11:52:25.008369  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | exit 0
	I1209 11:52:25.130532  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:25.130901  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetConfigRaw
	I1209 11:52:25.131568  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:25.134487  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.134816  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.134854  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.135163  663024 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/config.json ...
	I1209 11:52:25.135451  663024 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:25.135480  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:25.135736  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.138444  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.138853  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.138894  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.138981  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.139188  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.139327  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.139491  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.139655  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.139895  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.139906  663024 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:25.242441  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:25.242472  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.242837  663024 buildroot.go:166] provisioning hostname "default-k8s-diff-port-482476"
	I1209 11:52:25.242878  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.243093  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.245995  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.246447  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.246478  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.246685  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.246900  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.247052  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.247175  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.247330  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.247518  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.247531  663024 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-482476 && echo "default-k8s-diff-port-482476" | sudo tee /etc/hostname
	I1209 11:52:25.361366  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-482476
	
	I1209 11:52:25.361397  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.364194  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.364608  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.364639  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.364813  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.365064  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.365267  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.365369  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.365613  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.365790  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.365808  663024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-482476' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-482476/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-482476' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:25.475311  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:25.475346  663024 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:25.475386  663024 buildroot.go:174] setting up certificates
	I1209 11:52:25.475403  663024 provision.go:84] configureAuth start
	I1209 11:52:25.475412  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.475711  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:25.478574  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.478903  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.478935  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.479055  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.481280  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.481655  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.481688  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.481788  663024 provision.go:143] copyHostCerts
	I1209 11:52:25.481845  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:25.481876  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:25.481957  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:25.482056  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:25.482065  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:25.482090  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:25.482243  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:25.482254  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:25.482279  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:25.482336  663024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-482476 san=[127.0.0.1 192.168.50.25 default-k8s-diff-port-482476 localhost minikube]
	I1209 11:52:25.534856  663024 provision.go:177] copyRemoteCerts
	I1209 11:52:25.534921  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:25.534951  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.537732  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.538138  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.538190  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.538390  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.538611  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.538783  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.538943  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:25.619772  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:25.643527  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1209 11:52:25.668517  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:25.693573  663024 provision.go:87] duration metric: took 218.153182ms to configureAuth
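	Note: configureAuth regenerated the server certificate with the SANs listed above, and copyRemoteCerts pushed ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A hedged spot-check that the pushed cert carries the expected names:
	    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'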
	I1209 11:52:25.693615  663024 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:25.693807  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:25.693906  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.696683  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.697058  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.697092  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.697344  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.697548  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.697741  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.697868  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.698033  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.698229  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.698254  663024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:25.915568  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:25.915595  663024 machine.go:96] duration metric: took 780.126343ms to provisionDockerMachine
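	Note: the SSH command above drops the --insecure-registry flag into /etc/sysconfig/crio.minikube and restarts CRI-O so the service CIDR (10.96.0.0/12) is reachable without TLS. A hedged confirmation on the guest (the grep assumes the crio unit wires the sysconfig file in via EnvironmentFile):
	    cat /etc/sysconfig/crio.minikube
	    systemctl cat crio | grep -i EnvironmentFile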
	I1209 11:52:25.915610  663024 start.go:293] postStartSetup for "default-k8s-diff-port-482476" (driver="kvm2")
	I1209 11:52:25.915620  663024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:25.915644  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:25.916005  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:25.916047  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.919268  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.919602  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.919628  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.919775  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.919967  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.920133  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.920285  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.000530  663024 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:26.004544  663024 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:26.004574  663024 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:26.004651  663024 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:26.004759  663024 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:26.004885  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:26.013444  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:26.036052  663024 start.go:296] duration metric: took 120.422739ms for postStartSetup
	I1209 11:52:26.036110  663024 fix.go:56] duration metric: took 20.120932786s for fixHost
	I1209 11:52:26.036135  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.039079  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.039445  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.039478  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.039797  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.040065  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.040228  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.040427  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.040620  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:26.040906  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:26.040924  663024 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:26.142590  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745146.090497627
	
	I1209 11:52:26.142623  663024 fix.go:216] guest clock: 1733745146.090497627
	I1209 11:52:26.142634  663024 fix.go:229] Guest: 2024-12-09 11:52:26.090497627 +0000 UTC Remote: 2024-12-09 11:52:26.036115182 +0000 UTC m=+146.587055001 (delta=54.382445ms)
	I1209 11:52:26.142669  663024 fix.go:200] guest clock delta is within tolerance: 54.382445ms
	I1209 11:52:26.142681  663024 start.go:83] releasing machines lock for "default-k8s-diff-port-482476", held for 20.227543026s
	I1209 11:52:26.142723  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.143032  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:26.146118  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.146602  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.146634  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.146841  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147440  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147709  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147833  663024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:26.147872  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.147980  663024 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:26.148009  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.151002  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151346  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151379  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.151410  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151534  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.151729  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.151848  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.151876  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151904  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.152003  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.152082  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.152159  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.152322  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.152565  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.231575  663024 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:26.267939  663024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:26.418953  663024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:26.426243  663024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:26.426337  663024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:26.448407  663024 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:26.448442  663024 start.go:495] detecting cgroup driver to use...
	I1209 11:52:26.448540  663024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:26.469675  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:26.488825  663024 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:26.488902  663024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:26.507716  663024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:26.525232  663024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:26.664062  663024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:26.854813  663024 docker.go:233] disabling docker service ...
	I1209 11:52:26.854883  663024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:26.870021  663024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:26.883610  663024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:27.001237  663024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:27.126865  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:27.144121  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:27.168073  663024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:52:27.168242  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.180516  663024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:27.180587  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.191681  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.204047  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.214157  663024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:27.225934  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.236691  663024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.258774  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
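	Note: after the sed edits above, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should read roughly as follows (reconstructed from the commands, not copied from the host):
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.10"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",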
	I1209 11:52:27.271986  663024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:27.283488  663024 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:27.283539  663024 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:27.299065  663024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
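	Note: the sysctl probe failed only because br_netfilter was not loaded yet; after the modprobe and the ip_forward write, bridged pod traffic can be filtered and forwarded. A hedged verification:
	    lsmod | grep br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward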
	I1209 11:52:27.309203  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:27.431740  663024 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:27.529577  663024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:27.529668  663024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:27.534733  663024 start.go:563] Will wait 60s for crictl version
	I1209 11:52:27.534800  663024 ssh_runner.go:195] Run: which crictl
	I1209 11:52:27.538544  663024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:27.577577  663024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
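	Note: the version probe above reaches CRI-O through the socket that was just written to /etc/crictl.yaml, which is why later crictl calls in this log need no --runtime-endpoint flag. Equivalent explicit invocation (sketch):
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version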
	I1209 11:52:27.577684  663024 ssh_runner.go:195] Run: crio --version
	I1209 11:52:27.607938  663024 ssh_runner.go:195] Run: crio --version
	I1209 11:52:27.645210  663024 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:52:23.133393  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:23.632776  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:24.133286  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:24.632415  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:25.132348  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:25.632478  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:26.132982  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:26.632517  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.132692  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.633291  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.646510  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:27.650014  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:27.650439  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:27.650469  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:27.650705  663024 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:27.654738  663024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
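	Note: the bash one-liner above rebuilds /etc/hosts through a temp file: it filters out any stale host.minikube.internal line, appends the gateway mapping, and copies the result back with a single sudo cp. Expected entry afterwards:
	    grep host.minikube.internal /etc/hosts
	    # 192.168.50.1	host.minikube.internal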
	I1209 11:52:27.668671  663024 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:27.668808  663024 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:52:27.668873  663024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:27.709582  663024 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:52:27.709679  663024 ssh_runner.go:195] Run: which lz4
	I1209 11:52:27.713702  663024 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:27.717851  663024 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:27.717887  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 11:52:29.037160  663024 crio.go:462] duration metric: took 1.32348676s to copy over tarball
	I1209 11:52:29.037262  663024 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:27.500098  661546 main.go:141] libmachine: (embed-certs-005123) Waiting to get IP...
	I1209 11:52:27.501088  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.501538  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.501605  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.501510  663907 retry.go:31] will retry after 191.187925ms: waiting for machine to come up
	I1209 11:52:27.694017  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.694574  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.694605  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.694512  663907 retry.go:31] will retry after 256.268ms: waiting for machine to come up
	I1209 11:52:27.952185  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.952863  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.952908  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.952759  663907 retry.go:31] will retry after 460.272204ms: waiting for machine to come up
	I1209 11:52:28.414403  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:28.414925  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:28.414967  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:28.414873  663907 retry.go:31] will retry after 450.761189ms: waiting for machine to come up
	I1209 11:52:28.867687  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:28.868350  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:28.868389  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:28.868313  663907 retry.go:31] will retry after 615.800863ms: waiting for machine to come up
	I1209 11:52:29.486566  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:29.487179  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:29.487218  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:29.487108  663907 retry.go:31] will retry after 628.641045ms: waiting for machine to come up
	I1209 11:52:30.117051  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:30.117424  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:30.117459  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:30.117356  663907 retry.go:31] will retry after 902.465226ms: waiting for machine to come up
	I1209 11:52:31.021756  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:31.022268  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:31.022298  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:31.022229  663907 retry.go:31] will retry after 918.939368ms: waiting for machine to come up
	I1209 11:52:26.594953  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:29.093499  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:28.132379  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:28.633377  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:29.132983  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:29.633370  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:30.132748  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:30.633383  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.133450  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.633210  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:32.132406  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:32.632598  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.234956  663024 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.197609203s)
	I1209 11:52:31.235007  663024 crio.go:469] duration metric: took 2.197798334s to extract the tarball
	I1209 11:52:31.235018  663024 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:31.275616  663024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:31.320918  663024 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:52:31.320945  663024 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:52:31.320961  663024 kubeadm.go:934] updating node { 192.168.50.25 8444 v1.31.2 crio true true} ...
	I1209 11:52:31.321122  663024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-482476 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
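	Note: this systemd drop-in is what the 327-byte scp below writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the empty ExecStart= line clears the packaged command before substituting minikube's own kubelet flags. A hedged way to inspect the effective unit on the guest:
	    systemctl cat kubelet
	    systemctl status kubelet --no-pager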
	I1209 11:52:31.321246  663024 ssh_runner.go:195] Run: crio config
	I1209 11:52:31.367805  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:52:31.367827  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:31.367839  663024 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:31.367863  663024 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.25 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-482476 NodeName:default-k8s-diff-port-482476 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:31.368005  663024 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.25
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-482476"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.25"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.25"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:31.368074  663024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:31.377831  663024 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:31.377902  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:31.386872  663024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1209 11:52:31.403764  663024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:31.419295  663024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
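	Note: the rendered kubeadm config above (four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is staged as kubeadm.yaml.new and only promoted to kubeadm.yaml after the diff check further down. Manual comparison, as this run does it:
	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new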
	I1209 11:52:31.435856  663024 ssh_runner.go:195] Run: grep 192.168.50.25	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:31.439480  663024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:31.455136  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:31.573295  663024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:31.589679  663024 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476 for IP: 192.168.50.25
	I1209 11:52:31.589703  663024 certs.go:194] generating shared ca certs ...
	I1209 11:52:31.589741  663024 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:31.589930  663024 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:31.589982  663024 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:31.589995  663024 certs.go:256] generating profile certs ...
	I1209 11:52:31.590137  663024 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.key
	I1209 11:52:31.590256  663024 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.key.e2346b12
	I1209 11:52:31.590322  663024 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.key
	I1209 11:52:31.590479  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:31.590522  663024 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:31.590535  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:31.590571  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:31.590612  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:31.590649  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:31.590710  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:31.591643  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:31.634363  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:31.660090  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:31.692933  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:31.726010  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 11:52:31.757565  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 11:52:31.781368  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:31.805233  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:31.828391  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:31.850407  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:31.873159  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:31.895503  663024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:31.911754  663024 ssh_runner.go:195] Run: openssl version
	I1209 11:52:31.917771  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:31.929857  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.934518  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.934596  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.940382  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:31.951417  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:31.961966  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.966234  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.966286  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.972070  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:31.982547  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:31.993215  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:31.997579  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:31.997641  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:32.003050  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:32.013463  663024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:32.017936  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:32.024029  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:32.029686  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:32.035260  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:32.040696  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:32.046116  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
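	Note: -checkend 86400 makes openssl exit non-zero if the certificate expires within the next 24 hours, so each of the Runs above doubles as a freshness gate before the existing certs are reused. Sketch of a single check with an explicit verdict:
	    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	      && echo 'valid for at least 24h' || echo 'expires within 24h'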
	I1209 11:52:32.051521  663024 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:32.051605  663024 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:32.051676  663024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:32.092529  663024 cri.go:89] found id: ""
	I1209 11:52:32.092623  663024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:32.103153  663024 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:32.103183  663024 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:32.103247  663024 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:32.113029  663024 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:32.114506  663024 kubeconfig.go:125] found "default-k8s-diff-port-482476" server: "https://192.168.50.25:8444"
	I1209 11:52:32.116929  663024 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:32.127055  663024 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.25
	I1209 11:52:32.127108  663024 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:32.127124  663024 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:32.127189  663024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:32.169401  663024 cri.go:89] found id: ""
	I1209 11:52:32.169507  663024 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:32.187274  663024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:32.196843  663024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:32.196867  663024 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:32.196925  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 11:52:32.205670  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:32.205754  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:32.214977  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 11:52:32.223707  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:32.223782  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:32.232514  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 11:52:32.240999  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:32.241076  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:32.250049  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 11:52:32.258782  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:32.258846  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:32.268447  663024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:32.277875  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:32.394016  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.494978  663024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.100920844s)
	I1209 11:52:33.495030  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.719319  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.787272  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.882783  663024 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:33.882876  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.383090  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.942735  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:31.943207  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:31.943244  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:31.943141  663907 retry.go:31] will retry after 1.153139191s: waiting for machine to come up
	I1209 11:52:33.097672  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:33.098233  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:33.098299  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:33.098199  663907 retry.go:31] will retry after 2.002880852s: waiting for machine to come up
	I1209 11:52:35.103239  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:35.103693  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:35.103724  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:35.103639  663907 retry.go:31] will retry after 2.219510124s: waiting for machine to come up
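
The retry.go lines above show libmachine polling for the VM's DHCP lease with a growing, slightly irregular delay between attempts ("will retry after 1.15s / 2.0s / 2.2s ..."). The snippet below is a minimal, self-contained Go sketch of that style of jittered-backoff wait; the function name, starting delay, doubling factor and 30-second budget are illustrative assumptions, not minikube's actual retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries fn with a roughly doubling, jittered delay until it succeeds
// or the overall budget runs out -- the same shape as the retry.go lines above.
func waitFor(fn func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := time.Second
	for {
		if err := fn(); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting: %w", err)
		} else {
			jitter := time.Duration(rand.Int63n(int64(delay) / 4))
			time.Sleep(delay + jitter)
			delay *= 2
		}
	}
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("machine has no IP address yet") // stand-in for the DHCP lookup
		}
		return nil
	}, 30*time.Second)
	fmt.Printf("succeeded after %d attempts, err=%v\n", attempts, err)
}
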
	I1209 11:52:31.593184  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:34.090877  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:36.094569  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:33.132924  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:33.632884  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.132528  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.632989  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.133398  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.632376  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:36.132936  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:36.633152  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:37.133343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:37.633367  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.883172  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.384008  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.883940  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.901453  663024 api_server.go:72] duration metric: took 2.018670363s to wait for apiserver process to appear ...
	I1209 11:52:35.901489  663024 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:35.901524  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.225976  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:38.226017  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:38.226037  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.269459  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:38.269549  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:38.401652  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.407995  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:38.408028  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:38.902416  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.914550  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:38.914579  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:39.401719  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:39.409382  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:39.409427  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:39.902488  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:39.907511  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 200:
	ok
	I1209 11:52:39.914532  663024 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:39.914562  663024 api_server.go:131] duration metric: took 4.013066199s to wait for apiserver health ...
	I1209 11:52:39.914586  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:52:39.914594  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:39.915954  663024 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
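
The api_server.go lines above poll https://192.168.50.25:8444/healthz until the initial 403 and 500 responses give way to a 200. The following is a minimal standalone Go sketch of that kind of polling loop, not minikube's implementation; the URL is copied from the log, while skipping TLS verification and the fixed 2-minute budget with a 500 ms interval are assumptions made to keep the sketch self-contained.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// URL taken from the log above; a real client should trust the cluster CA
	// instead of skipping certificate verification.
	url := "https://192.168.50.25:8444/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver reports healthy")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
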
	I1209 11:52:37.324833  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:37.325397  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:37.325430  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:37.325338  663907 retry.go:31] will retry after 3.636796307s: waiting for machine to come up
	I1209 11:52:40.966039  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:40.966438  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:40.966463  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:40.966419  663907 retry.go:31] will retry after 3.704289622s: waiting for machine to come up
	I1209 11:52:38.592804  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:40.593407  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:38.133368  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:38.632475  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.132993  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.633225  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:40.132552  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:40.633292  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:41.132443  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:41.632994  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:42.132631  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:42.633378  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.917397  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:39.928995  663024 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:52:39.953045  663024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:39.962582  663024 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:39.962628  663024 system_pods.go:61] "coredns-7c65d6cfc9-zzrgn" [dca7a835-3b66-4515-b571-6420afc42c44] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:39.962639  663024 system_pods.go:61] "etcd-default-k8s-diff-port-482476" [2323dbbc-9e7f-4047-b0be-b68b851f4986] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:39.962649  663024 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-482476" [0b7a4936-5282-46a4-a08a-e225b303f6f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:39.962658  663024 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-482476" [c6ff79a0-2177-4c79-8021-c523f8d53e9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:39.962666  663024 system_pods.go:61] "kube-proxy-6th5d" [0cff6df1-1adb-4b7e-8d59-a837db026339] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 11:52:39.962682  663024 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-482476" [524125eb-afd4-4e20-b0f0-e58019e84962] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:39.962694  663024 system_pods.go:61] "metrics-server-6867b74b74-bpccn" [7426c800-9ff7-4778-82a0-6c71fd05a222] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:39.962702  663024 system_pods.go:61] "storage-provisioner" [4478313a-58e8-4d24-ab0b-c087e664200d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:39.962711  663024 system_pods.go:74] duration metric: took 9.637672ms to wait for pod list to return data ...
	I1209 11:52:39.962725  663024 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:39.969576  663024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:39.969611  663024 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:39.969627  663024 node_conditions.go:105] duration metric: took 6.893708ms to run NodePressure ...
	I1209 11:52:39.969660  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:40.340239  663024 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:40.345384  663024 kubeadm.go:739] kubelet initialised
	I1209 11:52:40.345412  663024 kubeadm.go:740] duration metric: took 5.145751ms waiting for restarted kubelet to initialise ...
	I1209 11:52:40.345425  663024 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:40.350721  663024 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:42.357138  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:44.361981  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
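
The system_pods.go and pod_ready.go lines above first list the kube-system pods and then wait for each system-critical pod to report Ready. The sketch below shows one way to perform the same readiness check with client-go; it is illustrative only, and the kubeconfig path is a hypothetical placeholder (minikube merges its profiles into the user's regular kubeconfig).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, used only to keep this sketch self-contained.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%-55s Ready=%v\n", p.Name, ready)
	}
}
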
	I1209 11:52:44.674598  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.675048  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has current primary IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.675068  661546 main.go:141] libmachine: (embed-certs-005123) Found IP for machine: 192.168.72.218
	I1209 11:52:44.675075  661546 main.go:141] libmachine: (embed-certs-005123) Reserving static IP address...
	I1209 11:52:44.675492  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "embed-certs-005123", mac: "52:54:00:ee:a0:a8", ip: "192.168.72.218"} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.675522  661546 main.go:141] libmachine: (embed-certs-005123) DBG | skip adding static IP to network mk-embed-certs-005123 - found existing host DHCP lease matching {name: "embed-certs-005123", mac: "52:54:00:ee:a0:a8", ip: "192.168.72.218"}
	I1209 11:52:44.675537  661546 main.go:141] libmachine: (embed-certs-005123) Reserved static IP address: 192.168.72.218
	I1209 11:52:44.675555  661546 main.go:141] libmachine: (embed-certs-005123) Waiting for SSH to be available...
	I1209 11:52:44.675566  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Getting to WaitForSSH function...
	I1209 11:52:44.677490  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.677814  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.677860  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.677952  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Using SSH client type: external
	I1209 11:52:44.678012  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa (-rw-------)
	I1209 11:52:44.678042  661546 main.go:141] libmachine: (embed-certs-005123) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:44.678056  661546 main.go:141] libmachine: (embed-certs-005123) DBG | About to run SSH command:
	I1209 11:52:44.678068  661546 main.go:141] libmachine: (embed-certs-005123) DBG | exit 0
	I1209 11:52:44.798377  661546 main.go:141] libmachine: (embed-certs-005123) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:44.798782  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetConfigRaw
	I1209 11:52:44.799532  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:44.801853  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.802223  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.802255  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.802539  661546 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/config.json ...
	I1209 11:52:44.802777  661546 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:44.802799  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:44.802994  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:44.805481  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.805803  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.805838  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.805999  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:44.806219  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.806386  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.806555  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:44.806716  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:44.806886  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:44.806897  661546 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:44.914443  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:44.914480  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:44.914783  661546 buildroot.go:166] provisioning hostname "embed-certs-005123"
	I1209 11:52:44.914810  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:44.914973  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:44.918053  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.918471  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.918508  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.918701  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:44.918935  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.919087  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.919267  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:44.919452  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:44.919624  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:44.919645  661546 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-005123 && echo "embed-certs-005123" | sudo tee /etc/hostname
	I1209 11:52:45.032725  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-005123
	
	I1209 11:52:45.032760  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.035820  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.036222  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.036263  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.036466  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.036666  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.036864  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.037003  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.037189  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.037396  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.037413  661546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-005123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-005123/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-005123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:45.147189  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:45.147225  661546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:45.147284  661546 buildroot.go:174] setting up certificates
	I1209 11:52:45.147299  661546 provision.go:84] configureAuth start
	I1209 11:52:45.147313  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:45.147667  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:45.150526  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.150965  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.151009  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.151118  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.153778  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.154178  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.154213  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.154382  661546 provision.go:143] copyHostCerts
	I1209 11:52:45.154455  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:45.154478  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:45.154549  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:45.154673  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:45.154685  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:45.154717  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:45.154816  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:45.154829  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:45.154857  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:45.154935  661546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.embed-certs-005123 san=[127.0.0.1 192.168.72.218 embed-certs-005123 localhost minikube]
	I1209 11:52:45.382712  661546 provision.go:177] copyRemoteCerts
	I1209 11:52:45.382772  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:45.382801  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.385625  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.386020  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.386050  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.386241  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.386448  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.386626  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.386765  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:45.464427  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:45.488111  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1209 11:52:45.511231  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:45.534104  661546 provision.go:87] duration metric: took 386.787703ms to configureAuth
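
provision.go above generates a server certificate whose SANs cover 127.0.0.1, the VM IP 192.168.72.218, the machine name, localhost and minikube, then copies it to /etc/docker on the guest. The sketch below shows how such a SAN list is expressed with Go's crypto/x509; it emits a self-signed certificate purely for illustration, whereas minikube signs the server cert with its own CA, and the organization string and validity period here are assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs mirror the san=[...] list in the provisioning log above; the
	// self-signed shortcut and the 3-year validity are assumptions made to
	// keep the sketch short.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-005123"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-005123", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.218")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
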
	I1209 11:52:45.534141  661546 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:45.534411  661546 config.go:182] Loaded profile config "embed-certs-005123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:45.534526  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.537936  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.538370  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.538402  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.538584  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.538826  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.539019  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.539150  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.539378  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.539551  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.539568  661546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:45.771215  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:45.771259  661546 machine.go:96] duration metric: took 968.466766ms to provisionDockerMachine
	I1209 11:52:45.771276  661546 start.go:293] postStartSetup for "embed-certs-005123" (driver="kvm2")
	I1209 11:52:45.771287  661546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:45.771316  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:45.771673  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:45.771709  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.774881  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.775294  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.775340  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.775510  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.775714  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.775899  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.776065  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:45.856991  661546 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:45.862195  661546 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:45.862224  661546 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:45.862295  661546 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:45.862368  661546 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:45.862497  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:45.874850  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:45.899279  661546 start.go:296] duration metric: took 127.984399ms for postStartSetup
	I1209 11:52:45.899332  661546 fix.go:56] duration metric: took 19.756446591s for fixHost
	I1209 11:52:45.899362  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.902428  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.902828  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.902861  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.903117  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.903344  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.903554  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.903704  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.903955  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.904191  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.904209  661546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:46.007164  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745165.964649155
	
	I1209 11:52:46.007194  661546 fix.go:216] guest clock: 1733745165.964649155
	I1209 11:52:46.007217  661546 fix.go:229] Guest: 2024-12-09 11:52:45.964649155 +0000 UTC Remote: 2024-12-09 11:52:45.899337716 +0000 UTC m=+369.711404421 (delta=65.311439ms)
	I1209 11:52:46.007267  661546 fix.go:200] guest clock delta is within tolerance: 65.311439ms
	I1209 11:52:46.007280  661546 start.go:83] releasing machines lock for "embed-certs-005123", held for 19.864428938s
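
fix.go above runs `date +%s.%N` on the guest and compares the result with the host clock, accepting the 65 ms delta as within tolerance. A minimal Go version of that comparison is sketched below; it runs the date command locally rather than over SSH, and the 2-second tolerance is an assumed value for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	// The log runs `date +%s.%N` on the guest over SSH; this sketch runs it
	// locally, which is enough to show the delta arithmetic.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*1e9)) // float64 precision is ample for a ms-level check
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	tolerance := 2 * time.Second // assumed threshold for this sketch
	fmt.Printf("clock delta %v, within tolerance: %v\n", delta, delta < tolerance)
}
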
	I1209 11:52:46.007313  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.007616  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:46.011273  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.011799  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.011830  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.012074  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.012681  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.012907  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.013027  661546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:46.013099  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:46.013170  661546 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:46.013196  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:46.016473  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016764  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016840  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.016875  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016964  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:46.017186  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:46.017287  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.017401  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:46.017442  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.017480  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:46.017553  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:46.017785  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:46.017911  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:46.018075  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:46.129248  661546 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:46.136309  661546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:43.091899  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:45.592415  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:46.287879  661546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:46.293689  661546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:46.293770  661546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:46.311972  661546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:46.312009  661546 start.go:495] detecting cgroup driver to use...
	I1209 11:52:46.312085  661546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:46.329406  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:46.344607  661546 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:46.344664  661546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:46.360448  661546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:46.374509  661546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:46.503687  661546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:46.649152  661546 docker.go:233] disabling docker service ...
	I1209 11:52:46.649234  661546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:46.663277  661546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:46.677442  661546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:46.832667  661546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:46.949826  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:46.963119  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:46.981743  661546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:52:46.981834  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:46.991634  661546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:46.991706  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.004032  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.015001  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.025000  661546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:47.035513  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.045431  661546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.061931  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.071531  661546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:47.080492  661546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:47.080559  661546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:47.094021  661546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:52:47.104015  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:47.226538  661546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:47.318832  661546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:47.318911  661546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:47.323209  661546 start.go:563] Will wait 60s for crictl version
	I1209 11:52:47.323276  661546 ssh_runner.go:195] Run: which crictl
	I1209 11:52:47.326773  661546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:47.365536  661546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:47.365629  661546 ssh_runner.go:195] Run: crio --version
	I1209 11:52:47.392781  661546 ssh_runner.go:195] Run: crio --version
	I1209 11:52:47.422945  661546 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
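	[editor's note] The start.go lines above ("Will wait 60s for socket path /var/run/crio/crio.sock", then "sudo /usr/bin/crictl version") show the runtime readiness gate after CRI-O is restarted. The following is a minimal, hypothetical Go sketch of that gate, not minikube's actual implementation; the socket path, 60s timeout, and crictl invocation come from the log, the polling interval and function names are illustrative.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// waitForFile polls until path exists or the timeout elapses.
	func waitForFile(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // illustrative interval
		}
		return fmt.Errorf("%s did not appear within %s", path, timeout)
	}

	func main() {
		const sock = "/var/run/crio/crio.sock"
		if err := waitForFile(sock, 60*time.Second); err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		// Rough equivalent of the logged "sudo /usr/bin/crictl version" check.
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		if err != nil {
			fmt.Println("crictl version failed:", err)
			os.Exit(1)
		}
		fmt.Print(string(out))
	}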
	I1209 11:52:43.133189  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:43.632726  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:44.132804  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:44.632952  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:45.132474  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:45.633318  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.133116  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.632595  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:47.133211  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:47.633233  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.858128  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:49.358845  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:47.423936  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:47.426959  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:47.427401  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:47.427425  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:47.427636  661546 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:47.432509  661546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:47.448620  661546 kubeadm.go:883] updating cluster {Name:embed-certs-005123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:47.448772  661546 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:52:47.448824  661546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:47.485100  661546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:52:47.485173  661546 ssh_runner.go:195] Run: which lz4
	I1209 11:52:47.489202  661546 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:47.493060  661546 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:47.493093  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 11:52:48.772297  661546 crio.go:462] duration metric: took 1.283133931s to copy over tarball
	I1209 11:52:48.772381  661546 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:50.959318  661546 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.18690714s)
	I1209 11:52:50.959352  661546 crio.go:469] duration metric: took 2.187018432s to extract the tarball
	I1209 11:52:50.959359  661546 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:50.995746  661546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:51.037764  661546 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:52:51.037792  661546 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:52:51.037799  661546 kubeadm.go:934] updating node { 192.168.72.218 8443 v1.31.2 crio true true} ...
	I1209 11:52:51.037909  661546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-005123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:51.037972  661546 ssh_runner.go:195] Run: crio config
	I1209 11:52:51.080191  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:52:51.080220  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:51.080231  661546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:51.080258  661546 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.218 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-005123 NodeName:embed-certs-005123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:51.080442  661546 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-005123"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.218"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.218"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:51.080544  661546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:51.091894  661546 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:51.091975  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:51.101702  661546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1209 11:52:51.117636  661546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:51.133662  661546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1209 11:52:51.151725  661546 ssh_runner.go:195] Run: grep 192.168.72.218	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:51.155759  661546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:51.167480  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:47.592707  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:50.093177  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:48.132348  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:48.632894  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:49.133272  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:49.633015  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.132977  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.632533  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:51.132939  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:51.632463  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:52.133082  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:52.633298  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.357709  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.357740  663024 pod_ready.go:82] duration metric: took 10.006992001s for pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.357752  663024 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.363374  663024 pod_ready.go:93] pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.363403  663024 pod_ready.go:82] duration metric: took 5.642657ms for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.363417  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.368456  663024 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.368478  663024 pod_ready.go:82] duration metric: took 5.053713ms for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.368488  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.374156  663024 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.374205  663024 pod_ready.go:82] duration metric: took 5.708489ms for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.374219  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6th5d" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.378734  663024 pod_ready.go:93] pod "kube-proxy-6th5d" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.378752  663024 pod_ready.go:82] duration metric: took 4.526066ms for pod "kube-proxy-6th5d" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.378760  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:52.384763  663024 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:53.389110  663024 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:53.389146  663024 pod_ready.go:82] duration metric: took 3.010378852s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:53.389162  663024 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:51.305408  661546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:51.330738  661546 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123 for IP: 192.168.72.218
	I1209 11:52:51.330766  661546 certs.go:194] generating shared ca certs ...
	I1209 11:52:51.330791  661546 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:51.331002  661546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:51.331099  661546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:51.331116  661546 certs.go:256] generating profile certs ...
	I1209 11:52:51.331252  661546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/client.key
	I1209 11:52:51.331333  661546 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.key.a40d22b0
	I1209 11:52:51.331400  661546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.key
	I1209 11:52:51.331595  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:51.331631  661546 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:51.331645  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:51.331680  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:51.331717  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:51.331747  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:51.331824  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:51.332728  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:51.366002  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:51.400591  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:51.431219  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:51.459334  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 11:52:51.487240  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 11:52:51.522273  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:51.545757  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:51.572793  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:51.595719  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:51.618456  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:51.643337  661546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:51.659719  661546 ssh_runner.go:195] Run: openssl version
	I1209 11:52:51.665339  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:51.676145  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.680615  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.680670  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.686782  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:51.697398  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:51.707438  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.711764  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.711832  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.717278  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:51.727774  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:51.738575  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.742996  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.743057  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.748505  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:51.758738  661546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:51.763005  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:51.768964  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:51.775011  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:51.780810  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:51.786716  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:51.792351  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
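	[editor's note] The six "openssl x509 -noout -in ... -checkend 86400" runs above ask whether each control-plane certificate expires within the next 24 hours (86400 seconds). A small, hypothetical Go equivalent of one such probe is sketched below, assuming a PEM-encoded certificate; the file path is taken from the log, everything else (names, output) is illustrative.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within duration d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h")
		}
	}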
	I1209 11:52:51.798098  661546 kubeadm.go:392] StartCluster: {Name:embed-certs-005123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:51.798239  661546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:51.798296  661546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:51.840669  661546 cri.go:89] found id: ""
	I1209 11:52:51.840755  661546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:51.850404  661546 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:51.850429  661546 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:51.850474  661546 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:51.859350  661546 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:51.860405  661546 kubeconfig.go:125] found "embed-certs-005123" server: "https://192.168.72.218:8443"
	I1209 11:52:51.862591  661546 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:51.872497  661546 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.218
	I1209 11:52:51.872539  661546 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:51.872558  661546 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:51.872638  661546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:51.913221  661546 cri.go:89] found id: ""
	I1209 11:52:51.913316  661546 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:51.929885  661546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:51.940078  661546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:51.940105  661546 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:51.940166  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:51.948911  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:51.948977  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:51.958278  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:51.966808  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:51.966879  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:51.975480  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:51.984071  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:51.984127  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:51.992480  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:52.000798  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:52.000873  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:52.009553  661546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:52.019274  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:52.133477  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.081976  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.293871  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.364259  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.452043  661546 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:53.452147  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:53.952743  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.452498  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.952482  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.452783  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.483411  661546 api_server.go:72] duration metric: took 2.0313706s to wait for apiserver process to appear ...
	I1209 11:52:55.483448  661546 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:55.483473  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:55.483982  661546 api_server.go:269] stopped: https://192.168.72.218:8443/healthz: Get "https://192.168.72.218:8443/healthz": dial tcp 192.168.72.218:8443: connect: connection refused
	I1209 11:52:55.983589  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:52.592309  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:55.257400  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:53.132520  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:53.633385  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.132432  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.632974  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.132958  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.633343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:56.132687  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:56.633236  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:57.133489  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:57.633105  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.396602  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:57.397077  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:58.136225  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:58.136259  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:58.136276  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.174521  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:58.174583  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:58.484089  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.489495  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:58.489536  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:58.984185  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.990889  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:58.990932  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:59.484415  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:59.490878  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 200:
	ok
	I1209 11:52:59.498196  661546 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:59.498231  661546 api_server.go:131] duration metric: took 4.014775842s to wait for apiserver health ...
	I1209 11:52:59.498241  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:52:59.498247  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:59.499779  661546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:52:59.500941  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:59.514201  661546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:52:59.544391  661546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:59.555798  661546 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:59.555837  661546 system_pods.go:61] "coredns-7c65d6cfc9-cdnjm" [7cb724f8-c570-4a19-808d-da994ec43eaa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:59.555849  661546 system_pods.go:61] "etcd-embed-certs-005123" [bf194765-7520-4b5d-a1e5-b49830a0f620] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:59.555858  661546 system_pods.go:61] "kube-apiserver-embed-certs-005123" [470f6c19-0112-4b0d-89d9-b792e912cf6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:59.555863  661546 system_pods.go:61] "kube-controller-manager-embed-certs-005123" [b42748b2-f3a9-4d29-a832-a30d54b329c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:59.555868  661546 system_pods.go:61] "kube-proxy-b7bf2" [f9aab69c-2232-4f56-a502-ffd033f7ac10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 11:52:59.555877  661546 system_pods.go:61] "kube-scheduler-embed-certs-005123" [e61a8e3c-c1d3-4dab-abb2-6f5221bc5d25] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:59.555885  661546 system_pods.go:61] "metrics-server-6867b74b74-x4kvn" [210cb99c-e3e7-4337-bed4-985cb98143dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:59.555893  661546 system_pods.go:61] "storage-provisioner" [f2f7d9e2-1121-4df2-adb7-a0af32f957ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:59.555903  661546 system_pods.go:74] duration metric: took 11.485008ms to wait for pod list to return data ...
	I1209 11:52:59.555913  661546 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:59.560077  661546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:59.560100  661546 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:59.560110  661546 node_conditions.go:105] duration metric: took 4.192476ms to run NodePressure ...
	I1209 11:52:59.560132  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:59.890141  661546 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:59.895382  661546 kubeadm.go:739] kubelet initialised
	I1209 11:52:59.895414  661546 kubeadm.go:740] duration metric: took 5.227549ms waiting for restarted kubelet to initialise ...
	I1209 11:52:59.895425  661546 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:59.901454  661546 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace to be "Ready" ...
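	[editor's note] The api_server.go sequence above (connection refused, then 403 for anonymous access, then 500 while post-start hooks such as rbac/bootstrap-roles finish, finally 200 "ok") is a plain HTTPS poll of the apiserver's /healthz endpoint. The sketch below is a hypothetical, simplified version of that loop, not minikube's code: the URL comes from the log, while the timeout, interval, and the InsecureSkipVerify shortcut are illustrative assumptions.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz GETs url until it returns 200, treating refused connections,
	// 403 (anonymous forbidden) and 500 (post-start hooks pending) as "retry".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Simplification for this sketch: skip cert verification instead of
			// trusting the cluster CA as a real client would.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.218:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}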
	I1209 11:52:57.593336  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:00.094942  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:58.132858  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:58.633386  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.132544  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.633427  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:00.133402  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:00.632719  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:01.132786  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:01.632909  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:02.133197  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:02.632620  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.896691  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:02.396546  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:01.907730  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:03.910835  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:02.591692  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:05.090892  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:03.133091  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:03.633389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.132587  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.633239  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:05.132773  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:05.632456  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:06.132989  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:06.632584  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:07.133153  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:07.633389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.895599  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:06.896541  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.912963  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:06.408122  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.412579  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:10.419673  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:10.419702  661546 pod_ready.go:82] duration metric: took 10.518223469s for pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:10.419716  661546 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:07.591181  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:10.091248  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.132885  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:08.633192  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:09.132446  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:09.633385  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:10.132534  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:10.632399  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.132877  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.633091  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:12.132592  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:12.633185  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.396121  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.901605  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:12.425696  661546 pod_ready.go:103] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.926007  661546 pod_ready.go:93] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.926041  661546 pod_ready.go:82] duration metric: took 3.50631846s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.926053  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.931124  661546 pod_ready.go:93] pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.931150  661546 pod_ready.go:82] duration metric: took 5.090118ms for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.931163  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.935763  661546 pod_ready.go:93] pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.935783  661546 pod_ready.go:82] duration metric: took 4.613902ms for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.935792  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b7bf2" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.940013  661546 pod_ready.go:93] pod "kube-proxy-b7bf2" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.940037  661546 pod_ready.go:82] duration metric: took 4.238468ms for pod "kube-proxy-b7bf2" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.940050  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.944480  661546 pod_ready.go:93] pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.944497  661546 pod_ready.go:82] duration metric: took 4.439334ms for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.944504  661546 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:15.951194  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
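The 661546 entries above show pod_ready.go polling each system-critical pod of the embed-certs profile until it reports Ready, then moving to the next; only the metrics-server pod keeps logging "Ready":"False" through this stretch of the log. A minimal sketch of the same readiness check, expressed as kubectl commands; the context name embed-certs-005123 is an assumption inferred from the node names in the log, not taken from the output:

    # Illustrative only; not part of the test output. Roughly the check
    # pod_ready.go performs for each system-critical pod in kube-system.
    kubectl --context embed-certs-005123 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    kubectl --context embed-certs-005123 -n kube-system wait pod \
      kube-apiserver-embed-certs-005123 --for=condition=Ready --timeout=4m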
	I1209 11:53:12.091413  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:14.591239  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.132852  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:13.632863  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:14.132638  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:14.632522  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:15.133201  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:15.632442  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:16.132620  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:16.132747  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:16.171708  662586 cri.go:89] found id: ""
	I1209 11:53:16.171748  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.171761  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:16.171768  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:16.171823  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:16.206350  662586 cri.go:89] found id: ""
	I1209 11:53:16.206381  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.206390  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:16.206398  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:16.206468  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:16.239292  662586 cri.go:89] found id: ""
	I1209 11:53:16.239325  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.239334  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:16.239341  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:16.239397  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:16.275809  662586 cri.go:89] found id: ""
	I1209 11:53:16.275841  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.275850  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:16.275856  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:16.275913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:16.310434  662586 cri.go:89] found id: ""
	I1209 11:53:16.310466  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.310474  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:16.310480  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:16.310540  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:16.347697  662586 cri.go:89] found id: ""
	I1209 11:53:16.347729  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.347738  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:16.347745  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:16.347801  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:16.380949  662586 cri.go:89] found id: ""
	I1209 11:53:16.380977  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.380985  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:16.380992  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:16.381074  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:16.415236  662586 cri.go:89] found id: ""
	I1209 11:53:16.415268  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.415290  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:16.415304  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:16.415321  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:16.459614  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:16.459645  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:16.509575  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:16.509617  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:16.522864  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:16.522898  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:16.644997  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:16.645059  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:16.645106  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:16.396028  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:18.397195  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:17.951721  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.952199  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:16.591767  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.091470  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:21.095835  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.220978  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:19.233506  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:19.233597  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:19.268975  662586 cri.go:89] found id: ""
	I1209 11:53:19.269007  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.269019  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:19.269027  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:19.269103  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:19.304898  662586 cri.go:89] found id: ""
	I1209 11:53:19.304935  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.304949  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:19.304957  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:19.305034  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:19.344798  662586 cri.go:89] found id: ""
	I1209 11:53:19.344835  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.344846  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:19.344855  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:19.344925  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:19.395335  662586 cri.go:89] found id: ""
	I1209 11:53:19.395377  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.395387  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:19.395395  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:19.395464  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:19.430334  662586 cri.go:89] found id: ""
	I1209 11:53:19.430364  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.430377  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:19.430386  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:19.430465  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:19.468732  662586 cri.go:89] found id: ""
	I1209 11:53:19.468766  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.468775  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:19.468782  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:19.468836  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:19.503194  662586 cri.go:89] found id: ""
	I1209 11:53:19.503242  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.503255  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:19.503263  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:19.503328  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:19.537074  662586 cri.go:89] found id: ""
	I1209 11:53:19.537114  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.537125  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:19.537135  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:19.537151  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:19.590081  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:19.590130  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:19.604350  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:19.604388  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:19.683073  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:19.683106  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:19.683124  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:19.763564  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:19.763611  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:22.302792  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:22.315992  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:22.316079  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:22.350464  662586 cri.go:89] found id: ""
	I1209 11:53:22.350495  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.350505  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:22.350511  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:22.350569  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:22.382832  662586 cri.go:89] found id: ""
	I1209 11:53:22.382867  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.382880  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:22.382889  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:22.382958  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:22.417826  662586 cri.go:89] found id: ""
	I1209 11:53:22.417859  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.417871  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:22.417880  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:22.417963  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:22.451545  662586 cri.go:89] found id: ""
	I1209 11:53:22.451579  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.451588  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:22.451594  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:22.451659  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:22.488413  662586 cri.go:89] found id: ""
	I1209 11:53:22.488448  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.488458  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:22.488464  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:22.488531  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:22.523891  662586 cri.go:89] found id: ""
	I1209 11:53:22.523916  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.523925  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:22.523931  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:22.523990  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:22.555828  662586 cri.go:89] found id: ""
	I1209 11:53:22.555866  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.555879  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:22.555887  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:22.555960  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:22.592133  662586 cri.go:89] found id: ""
	I1209 11:53:22.592171  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.592181  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:22.592192  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:22.592209  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:22.641928  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:22.641966  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:22.655182  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:22.655215  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:53:20.896189  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:23.397242  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:21.957934  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:24.451292  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:23.591147  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:25.591982  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	W1209 11:53:22.724320  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:22.724343  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:22.724359  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:22.811692  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:22.811743  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:25.347903  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:25.360839  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:25.360907  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:25.392880  662586 cri.go:89] found id: ""
	I1209 11:53:25.392917  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.392930  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:25.392939  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:25.393008  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:25.427862  662586 cri.go:89] found id: ""
	I1209 11:53:25.427905  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.427914  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:25.427921  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:25.428009  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:25.463733  662586 cri.go:89] found id: ""
	I1209 11:53:25.463767  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.463778  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:25.463788  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:25.463884  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:25.501653  662586 cri.go:89] found id: ""
	I1209 11:53:25.501681  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.501690  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:25.501697  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:25.501751  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:25.535368  662586 cri.go:89] found id: ""
	I1209 11:53:25.535410  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.535422  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:25.535431  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:25.535511  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:25.569709  662586 cri.go:89] found id: ""
	I1209 11:53:25.569739  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.569748  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:25.569761  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:25.569827  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:25.604352  662586 cri.go:89] found id: ""
	I1209 11:53:25.604389  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.604404  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:25.604413  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:25.604477  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:25.635832  662586 cri.go:89] found id: ""
	I1209 11:53:25.635865  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.635878  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:25.635892  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:25.635908  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:25.650611  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:25.650647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:25.721092  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:25.721121  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:25.721139  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:25.795552  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:25.795598  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:25.858088  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:25.858161  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:25.898217  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.395882  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:26.950691  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.951203  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.091306  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:30.091842  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.410683  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:28.422993  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:28.423072  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:28.455054  662586 cri.go:89] found id: ""
	I1209 11:53:28.455083  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.455092  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:28.455098  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:28.455162  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:28.493000  662586 cri.go:89] found id: ""
	I1209 11:53:28.493037  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.493046  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:28.493052  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:28.493104  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:28.526294  662586 cri.go:89] found id: ""
	I1209 11:53:28.526333  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.526346  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:28.526354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:28.526417  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:28.560383  662586 cri.go:89] found id: ""
	I1209 11:53:28.560414  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.560423  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:28.560430  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:28.560485  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:28.595906  662586 cri.go:89] found id: ""
	I1209 11:53:28.595935  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.595946  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:28.595954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:28.596021  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:28.629548  662586 cri.go:89] found id: ""
	I1209 11:53:28.629584  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.629597  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:28.629607  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:28.629673  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:28.666362  662586 cri.go:89] found id: ""
	I1209 11:53:28.666398  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.666410  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:28.666418  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:28.666494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:28.697704  662586 cri.go:89] found id: ""
	I1209 11:53:28.697736  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.697746  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:28.697756  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:28.697769  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:28.745774  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:28.745816  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:28.759543  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:28.759582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:28.834772  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:28.834795  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:28.834812  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:28.913137  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:28.913178  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:31.460658  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:31.473503  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:31.473575  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:31.506710  662586 cri.go:89] found id: ""
	I1209 11:53:31.506748  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.506760  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:31.506770  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:31.506842  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:31.544127  662586 cri.go:89] found id: ""
	I1209 11:53:31.544188  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.544202  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:31.544211  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:31.544289  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:31.591081  662586 cri.go:89] found id: ""
	I1209 11:53:31.591116  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.591128  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:31.591135  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:31.591213  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:31.629311  662586 cri.go:89] found id: ""
	I1209 11:53:31.629340  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.629348  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:31.629355  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:31.629432  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:31.671035  662586 cri.go:89] found id: ""
	I1209 11:53:31.671069  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.671081  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:31.671090  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:31.671162  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:31.705753  662586 cri.go:89] found id: ""
	I1209 11:53:31.705792  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.705805  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:31.705815  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:31.705889  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:31.739118  662586 cri.go:89] found id: ""
	I1209 11:53:31.739146  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.739155  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:31.739162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:31.739225  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:31.771085  662586 cri.go:89] found id: ""
	I1209 11:53:31.771120  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.771129  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:31.771139  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:31.771152  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:31.820993  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:31.821049  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:31.835576  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:31.835612  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:31.903011  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:31.903039  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:31.903056  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:31.977784  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:31.977830  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:30.896197  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:33.395937  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:31.450832  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:33.451161  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:35.451446  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:32.590724  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:34.592352  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:34.514654  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:34.529156  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:34.529236  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:34.567552  662586 cri.go:89] found id: ""
	I1209 11:53:34.567580  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.567590  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:34.567598  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:34.567665  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:34.608863  662586 cri.go:89] found id: ""
	I1209 11:53:34.608891  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.608900  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:34.608907  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:34.608970  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:34.647204  662586 cri.go:89] found id: ""
	I1209 11:53:34.647242  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.647254  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:34.647263  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:34.647333  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:34.682511  662586 cri.go:89] found id: ""
	I1209 11:53:34.682565  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.682580  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:34.682596  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:34.682674  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:34.717557  662586 cri.go:89] found id: ""
	I1209 11:53:34.717585  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.717595  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:34.717602  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:34.717670  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:34.749814  662586 cri.go:89] found id: ""
	I1209 11:53:34.749851  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.749865  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:34.749876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:34.749949  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:34.782732  662586 cri.go:89] found id: ""
	I1209 11:53:34.782766  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.782776  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:34.782782  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:34.782846  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:34.817114  662586 cri.go:89] found id: ""
	I1209 11:53:34.817149  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.817162  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:34.817175  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:34.817192  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:34.885963  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:34.885986  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:34.886001  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:34.969858  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:34.969905  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:35.006981  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:35.007024  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:35.055360  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:35.055401  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:37.570641  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:37.595904  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:37.595986  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:37.642205  662586 cri.go:89] found id: ""
	I1209 11:53:37.642248  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.642261  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:37.642270  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:37.642347  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:37.676666  662586 cri.go:89] found id: ""
	I1209 11:53:37.676692  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.676701  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:37.676707  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:37.676760  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:35.396037  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.896489  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.952569  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:40.450464  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.092250  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:39.092392  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.714201  662586 cri.go:89] found id: ""
	I1209 11:53:37.714233  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.714243  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:37.714249  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:37.714311  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:37.748018  662586 cri.go:89] found id: ""
	I1209 11:53:37.748047  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.748058  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:37.748067  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:37.748127  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:37.783763  662586 cri.go:89] found id: ""
	I1209 11:53:37.783799  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.783807  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:37.783823  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:37.783898  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:37.822470  662586 cri.go:89] found id: ""
	I1209 11:53:37.822502  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.822514  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:37.822523  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:37.822585  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:37.858493  662586 cri.go:89] found id: ""
	I1209 11:53:37.858527  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.858537  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:37.858543  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:37.858599  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:37.899263  662586 cri.go:89] found id: ""
	I1209 11:53:37.899288  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.899295  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:37.899304  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:37.899321  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:37.972531  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:37.972559  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:37.972575  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:38.046271  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:38.046315  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:38.088829  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:38.088867  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:38.141935  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:38.141985  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:40.657131  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:40.669884  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:40.669954  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:40.704291  662586 cri.go:89] found id: ""
	I1209 11:53:40.704332  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.704345  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:40.704357  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:40.704435  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:40.738637  662586 cri.go:89] found id: ""
	I1209 11:53:40.738673  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.738684  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:40.738690  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:40.738747  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:40.770737  662586 cri.go:89] found id: ""
	I1209 11:53:40.770774  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.770787  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:40.770796  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:40.770865  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:40.805667  662586 cri.go:89] found id: ""
	I1209 11:53:40.805702  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.805729  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:40.805739  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:40.805812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:40.838444  662586 cri.go:89] found id: ""
	I1209 11:53:40.838482  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.838496  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:40.838505  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:40.838578  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:40.871644  662586 cri.go:89] found id: ""
	I1209 11:53:40.871679  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.871691  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:40.871700  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:40.871763  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:40.907242  662586 cri.go:89] found id: ""
	I1209 11:53:40.907275  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.907284  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:40.907291  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:40.907359  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:40.941542  662586 cri.go:89] found id: ""
	I1209 11:53:40.941570  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.941583  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:40.941595  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:40.941616  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:41.022344  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:41.022373  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:41.022387  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:41.097083  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:41.097129  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:41.135303  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:41.135349  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:41.191400  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:41.191447  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
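The 662586 entries repeat a fixed retry-and-diagnose cycle: pgrep for a running kube-apiserver, crictl lookups for each expected component (all returning no containers), then gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output, with describe nodes failing because nothing is listening on localhost:8443. A rough sketch of that cycle using the same commands the log shows; the loop structure and sleep interval are inferred, not part of the output:

    # Illustrative sketch, not part of the test output.
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                  kube-controller-manager kindnet kubernetes-dashboard; do
        sudo crictl ps -a --quiet --name="$name"    # every lookup returns no containers
      done
      sudo journalctl -u kubelet -n 400
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig   # refused: localhost:8443 is down
      sudo journalctl -u crio -n 400
      sleep 3                                       # the cycles above are roughly 3s apart
    done

Since no kube-apiserver container ever appears, the cycle keeps repeating; the v1.20.0 binary paths suggest this run belongs to the old-k8s-version start, though the log itself does not name the test.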
	I1209 11:53:40.396681  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:42.895118  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:42.451217  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:44.951893  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:41.591753  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:44.090762  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:46.091821  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:43.705246  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:43.717939  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:43.718001  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:43.750027  662586 cri.go:89] found id: ""
	I1209 11:53:43.750066  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.750079  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:43.750087  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:43.750156  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:43.782028  662586 cri.go:89] found id: ""
	I1209 11:53:43.782067  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.782081  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:43.782090  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:43.782153  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:43.815509  662586 cri.go:89] found id: ""
	I1209 11:53:43.815549  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.815562  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:43.815570  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:43.815629  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:43.852803  662586 cri.go:89] found id: ""
	I1209 11:53:43.852834  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.852842  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:43.852850  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:43.852915  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:43.886761  662586 cri.go:89] found id: ""
	I1209 11:53:43.886789  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.886798  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:43.886805  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:43.886883  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:43.924427  662586 cri.go:89] found id: ""
	I1209 11:53:43.924458  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.924466  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:43.924478  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:43.924542  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:43.960351  662586 cri.go:89] found id: ""
	I1209 11:53:43.960381  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.960398  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:43.960407  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:43.960476  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:43.993933  662586 cri.go:89] found id: ""
	I1209 11:53:43.993960  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.993969  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:43.993979  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:43.994002  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:44.006915  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:44.006952  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:44.078928  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:44.078981  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:44.078999  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:44.158129  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:44.158188  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:44.199543  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:44.199577  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:46.748607  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:46.762381  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:46.762494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:46.795674  662586 cri.go:89] found id: ""
	I1209 11:53:46.795713  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.795727  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:46.795737  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:46.795812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:46.834027  662586 cri.go:89] found id: ""
	I1209 11:53:46.834055  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.834065  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:46.834072  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:46.834124  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:46.872116  662586 cri.go:89] found id: ""
	I1209 11:53:46.872156  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.872169  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:46.872179  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:46.872264  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:46.906571  662586 cri.go:89] found id: ""
	I1209 11:53:46.906599  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.906608  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:46.906615  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:46.906676  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:46.938266  662586 cri.go:89] found id: ""
	I1209 11:53:46.938303  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.938315  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:46.938323  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:46.938381  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:46.972281  662586 cri.go:89] found id: ""
	I1209 11:53:46.972318  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.972329  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:46.972337  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:46.972391  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:47.004797  662586 cri.go:89] found id: ""
	I1209 11:53:47.004828  662586 logs.go:282] 0 containers: []
	W1209 11:53:47.004837  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:47.004843  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:47.004908  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:47.035877  662586 cri.go:89] found id: ""
	I1209 11:53:47.035905  662586 logs.go:282] 0 containers: []
	W1209 11:53:47.035917  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:47.035931  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:47.035947  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:47.087654  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:47.087706  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:47.102311  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:47.102346  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:47.195370  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:47.195396  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:47.195414  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:47.279103  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:47.279158  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:44.895382  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:46.895838  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:48.896133  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:47.453879  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:49.951686  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:48.591393  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:51.090806  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:49.817942  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:49.830291  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:49.830357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:49.862917  662586 cri.go:89] found id: ""
	I1209 11:53:49.862950  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.862959  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:49.862965  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:49.863033  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:49.894971  662586 cri.go:89] found id: ""
	I1209 11:53:49.895005  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.895018  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:49.895027  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:49.895097  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:49.931737  662586 cri.go:89] found id: ""
	I1209 11:53:49.931775  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.931786  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:49.931800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:49.931862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:49.971064  662586 cri.go:89] found id: ""
	I1209 11:53:49.971097  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.971109  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:49.971118  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:49.971210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:50.005354  662586 cri.go:89] found id: ""
	I1209 11:53:50.005393  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.005417  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:50.005427  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:50.005501  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:50.044209  662586 cri.go:89] found id: ""
	I1209 11:53:50.044240  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.044249  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:50.044257  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:50.044313  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:50.076360  662586 cri.go:89] found id: ""
	I1209 11:53:50.076408  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.076418  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:50.076426  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:50.076494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:50.112125  662586 cri.go:89] found id: ""
	I1209 11:53:50.112168  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.112196  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:50.112210  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:50.112228  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:50.164486  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:50.164530  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:50.178489  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:50.178525  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:50.250131  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:50.250165  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:50.250196  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:50.329733  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:50.329779  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:50.896354  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:53.395149  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:52.450595  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:54.450939  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:53.092311  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:55.590766  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:52.874887  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:52.888518  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:52.888607  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:52.924361  662586 cri.go:89] found id: ""
	I1209 11:53:52.924389  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.924398  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:52.924404  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:52.924467  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:52.957769  662586 cri.go:89] found id: ""
	I1209 11:53:52.957803  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.957816  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:52.957824  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:52.957891  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:52.990339  662586 cri.go:89] found id: ""
	I1209 11:53:52.990376  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.990388  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:52.990397  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:52.990461  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:53.022959  662586 cri.go:89] found id: ""
	I1209 11:53:53.023003  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.023017  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:53.023028  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:53.023111  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:53.060271  662586 cri.go:89] found id: ""
	I1209 11:53:53.060299  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.060315  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:53.060321  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:53.060390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:53.093470  662586 cri.go:89] found id: ""
	I1209 11:53:53.093500  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.093511  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:53.093519  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:53.093575  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:53.128902  662586 cri.go:89] found id: ""
	I1209 11:53:53.128941  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.128955  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:53.128963  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:53.129036  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:53.161927  662586 cri.go:89] found id: ""
	I1209 11:53:53.161955  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.161964  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:53.161974  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:53.161988  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:53.214098  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:53.214140  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:53.229191  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:53.229232  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:53.308648  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:53.308678  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:53.308695  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:53.386776  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:53.386816  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:55.929307  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:55.942217  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:55.942285  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:55.983522  662586 cri.go:89] found id: ""
	I1209 11:53:55.983563  662586 logs.go:282] 0 containers: []
	W1209 11:53:55.983572  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:55.983579  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:55.983645  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:56.017262  662586 cri.go:89] found id: ""
	I1209 11:53:56.017293  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.017308  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:56.017314  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:56.017367  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:56.052385  662586 cri.go:89] found id: ""
	I1209 11:53:56.052419  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.052429  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:56.052436  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:56.052489  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:56.085385  662586 cri.go:89] found id: ""
	I1209 11:53:56.085432  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.085444  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:56.085452  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:56.085519  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:56.122754  662586 cri.go:89] found id: ""
	I1209 11:53:56.122785  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.122794  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:56.122800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:56.122862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:56.159033  662586 cri.go:89] found id: ""
	I1209 11:53:56.159061  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.159070  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:56.159077  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:56.159128  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:56.198022  662586 cri.go:89] found id: ""
	I1209 11:53:56.198058  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.198070  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:56.198078  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:56.198148  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:56.231475  662586 cri.go:89] found id: ""
	I1209 11:53:56.231515  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.231528  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:56.231542  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:56.231559  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:56.304922  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:56.304969  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:56.339875  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:56.339916  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:56.392893  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:56.392929  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:56.406334  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:56.406376  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:56.474037  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:55.895077  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:57.895835  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:56.452163  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:58.950981  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:57.590943  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:00.091057  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:58.974725  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:58.987817  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:58.987890  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:59.020951  662586 cri.go:89] found id: ""
	I1209 11:53:59.020987  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.020996  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:59.021003  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:59.021055  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:59.055675  662586 cri.go:89] found id: ""
	I1209 11:53:59.055715  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.055727  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:59.055733  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:59.055800  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:59.090099  662586 cri.go:89] found id: ""
	I1209 11:53:59.090138  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.090150  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:59.090158  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:59.090252  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:59.124680  662586 cri.go:89] found id: ""
	I1209 11:53:59.124718  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.124730  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:59.124739  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:59.124802  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:59.157772  662586 cri.go:89] found id: ""
	I1209 11:53:59.157808  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.157819  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:59.157828  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:59.157892  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:59.191098  662586 cri.go:89] found id: ""
	I1209 11:53:59.191132  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.191141  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:59.191148  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:59.191212  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:59.224050  662586 cri.go:89] found id: ""
	I1209 11:53:59.224090  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.224102  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:59.224110  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:59.224198  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:59.262361  662586 cri.go:89] found id: ""
	I1209 11:53:59.262397  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.262418  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:59.262432  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:59.262456  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:59.276811  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:59.276844  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:59.349465  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:59.349492  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:59.349506  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:59.429146  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:59.429192  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:59.470246  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:59.470287  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:02.021651  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:02.036039  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:02.036109  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:02.070999  662586 cri.go:89] found id: ""
	I1209 11:54:02.071034  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.071045  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:02.071052  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:02.071119  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:02.107506  662586 cri.go:89] found id: ""
	I1209 11:54:02.107536  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.107546  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:02.107554  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:02.107624  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:02.146279  662586 cri.go:89] found id: ""
	I1209 11:54:02.146314  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.146326  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:02.146342  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:02.146408  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:02.178349  662586 cri.go:89] found id: ""
	I1209 11:54:02.178378  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.178387  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:02.178402  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:02.178460  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:02.211916  662586 cri.go:89] found id: ""
	I1209 11:54:02.211952  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.211963  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:02.211969  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:02.212038  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:02.246334  662586 cri.go:89] found id: ""
	I1209 11:54:02.246370  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.246380  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:02.246387  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:02.246452  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:02.280111  662586 cri.go:89] found id: ""
	I1209 11:54:02.280157  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.280168  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:02.280175  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:02.280246  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:02.314141  662586 cri.go:89] found id: ""
	I1209 11:54:02.314188  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.314203  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:02.314216  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:02.314236  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:02.327220  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:02.327253  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:02.396099  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:02.396127  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:02.396142  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:02.478096  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:02.478148  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:02.515144  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:02.515175  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:59.896109  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:02.396485  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:04.396925  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:01.450279  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:03.450732  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:05.451265  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:02.092010  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:04.092427  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:05.069286  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:05.082453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:05.082540  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:05.116263  662586 cri.go:89] found id: ""
	I1209 11:54:05.116299  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.116313  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:05.116321  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:05.116388  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:05.150736  662586 cri.go:89] found id: ""
	I1209 11:54:05.150775  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.150788  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:05.150796  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:05.150864  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:05.183757  662586 cri.go:89] found id: ""
	I1209 11:54:05.183792  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.183804  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:05.183812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:05.183873  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:05.215986  662586 cri.go:89] found id: ""
	I1209 11:54:05.216017  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.216029  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:05.216038  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:05.216096  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:05.247648  662586 cri.go:89] found id: ""
	I1209 11:54:05.247686  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.247698  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:05.247707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:05.247776  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:05.279455  662586 cri.go:89] found id: ""
	I1209 11:54:05.279484  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.279495  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:05.279504  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:05.279567  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:05.320334  662586 cri.go:89] found id: ""
	I1209 11:54:05.320374  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.320387  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:05.320398  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:05.320490  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:05.353475  662586 cri.go:89] found id: ""
	I1209 11:54:05.353503  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.353512  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:05.353522  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:05.353536  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:05.368320  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:05.368357  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:05.442152  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:05.442193  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:05.442212  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:05.523726  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:05.523768  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:05.562405  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:05.562438  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:06.895764  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.897032  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:07.454237  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:09.456440  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:06.591474  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.591578  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:11.091599  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.115564  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:08.129426  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:08.129523  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:08.162412  662586 cri.go:89] found id: ""
	I1209 11:54:08.162454  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.162467  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:08.162477  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:08.162543  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:08.196821  662586 cri.go:89] found id: ""
	I1209 11:54:08.196860  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.196873  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:08.196882  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:08.196949  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:08.233068  662586 cri.go:89] found id: ""
	I1209 11:54:08.233106  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.233117  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:08.233124  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:08.233184  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:08.268683  662586 cri.go:89] found id: ""
	I1209 11:54:08.268715  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.268724  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:08.268731  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:08.268790  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:08.303237  662586 cri.go:89] found id: ""
	I1209 11:54:08.303276  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.303288  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:08.303309  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:08.303393  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:08.339513  662586 cri.go:89] found id: ""
	I1209 11:54:08.339543  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.339551  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:08.339557  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:08.339612  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:08.376237  662586 cri.go:89] found id: ""
	I1209 11:54:08.376268  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.376289  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:08.376298  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:08.376363  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:08.410530  662586 cri.go:89] found id: ""
	I1209 11:54:08.410560  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.410568  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:08.410577  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:08.410589  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:08.460064  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:08.460101  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:08.474547  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:08.474582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:08.544231  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:08.544260  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:08.544277  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:08.624727  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:08.624775  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:11.167943  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:11.183210  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:11.183294  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:11.221326  662586 cri.go:89] found id: ""
	I1209 11:54:11.221356  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.221365  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:11.221371  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:11.221434  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:11.254688  662586 cri.go:89] found id: ""
	I1209 11:54:11.254721  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.254730  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:11.254736  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:11.254801  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:11.287611  662586 cri.go:89] found id: ""
	I1209 11:54:11.287649  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.287660  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:11.287666  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:11.287738  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:11.320533  662586 cri.go:89] found id: ""
	I1209 11:54:11.320565  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.320574  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:11.320580  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:11.320638  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:11.362890  662586 cri.go:89] found id: ""
	I1209 11:54:11.362923  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.362933  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:11.362939  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:11.363007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:11.418729  662586 cri.go:89] found id: ""
	I1209 11:54:11.418762  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.418772  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:11.418779  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:11.418837  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:11.455336  662586 cri.go:89] found id: ""
	I1209 11:54:11.455374  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.455388  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:11.455397  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:11.455479  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:11.491307  662586 cri.go:89] found id: ""
	I1209 11:54:11.491334  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.491344  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:11.491355  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:11.491369  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:11.543161  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:11.543204  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:11.556633  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:11.556670  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:11.626971  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:11.627001  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:11.627025  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:11.702061  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:11.702107  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:11.396167  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:13.897097  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:11.952029  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:14.451701  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:13.590749  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:15.591845  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:14.245241  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:14.258461  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:14.258544  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:14.292108  662586 cri.go:89] found id: ""
	I1209 11:54:14.292147  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.292156  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:14.292163  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:14.292219  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:14.327347  662586 cri.go:89] found id: ""
	I1209 11:54:14.327381  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.327394  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:14.327403  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:14.327484  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:14.361188  662586 cri.go:89] found id: ""
	I1209 11:54:14.361220  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.361229  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:14.361236  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:14.361290  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:14.394898  662586 cri.go:89] found id: ""
	I1209 11:54:14.394936  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.394948  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:14.394960  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:14.395027  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:14.429326  662586 cri.go:89] found id: ""
	I1209 11:54:14.429402  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.429420  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:14.429431  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:14.429510  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:14.462903  662586 cri.go:89] found id: ""
	I1209 11:54:14.462938  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.462947  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:14.462954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:14.463009  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:14.496362  662586 cri.go:89] found id: ""
	I1209 11:54:14.496396  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.496409  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:14.496418  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:14.496562  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:14.530052  662586 cri.go:89] found id: ""
	I1209 11:54:14.530085  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.530098  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:14.530111  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:14.530131  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:14.543096  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:14.543133  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:14.611030  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:14.611055  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:14.611067  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:14.684984  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:14.685041  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:14.722842  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:14.722881  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:17.275868  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:17.288812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:17.288898  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:17.323732  662586 cri.go:89] found id: ""
	I1209 11:54:17.323766  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.323777  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:17.323786  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:17.323852  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:17.367753  662586 cri.go:89] found id: ""
	I1209 11:54:17.367788  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.367801  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:17.367810  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:17.367878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:17.411444  662586 cri.go:89] found id: ""
	I1209 11:54:17.411476  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.411488  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:17.411496  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:17.411563  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:17.450790  662586 cri.go:89] found id: ""
	I1209 11:54:17.450821  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.450832  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:17.450840  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:17.450913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:17.488824  662586 cri.go:89] found id: ""
	I1209 11:54:17.488859  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.488869  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:17.488876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:17.488948  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:17.522051  662586 cri.go:89] found id: ""
	I1209 11:54:17.522085  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.522094  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:17.522102  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:17.522165  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:17.556653  662586 cri.go:89] found id: ""
	I1209 11:54:17.556687  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.556700  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:17.556707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:17.556783  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:17.591303  662586 cri.go:89] found id: ""
	I1209 11:54:17.591337  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.591355  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:17.591367  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:17.591384  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:17.656675  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:17.656699  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:17.656712  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:16.396574  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:18.896050  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:16.950508  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:19.451294  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:18.091307  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:20.091489  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:17.739894  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:17.739939  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:17.789486  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:17.789517  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:17.843606  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:17.843648  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:20.361896  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:20.378015  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:20.378105  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:20.412252  662586 cri.go:89] found id: ""
	I1209 11:54:20.412299  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.412311  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:20.412327  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:20.412396  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:20.443638  662586 cri.go:89] found id: ""
	I1209 11:54:20.443671  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.443682  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:20.443690  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:20.443758  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:20.478578  662586 cri.go:89] found id: ""
	I1209 11:54:20.478613  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.478625  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:20.478634  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:20.478704  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:20.512232  662586 cri.go:89] found id: ""
	I1209 11:54:20.512266  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.512279  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:20.512295  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:20.512357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:20.544358  662586 cri.go:89] found id: ""
	I1209 11:54:20.544398  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.544413  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:20.544429  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:20.544494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:20.579476  662586 cri.go:89] found id: ""
	I1209 11:54:20.579513  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.579525  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:20.579533  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:20.579600  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:20.613851  662586 cri.go:89] found id: ""
	I1209 11:54:20.613884  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.613897  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:20.613903  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:20.613973  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:20.647311  662586 cri.go:89] found id: ""
	I1209 11:54:20.647342  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.647351  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:20.647362  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:20.647375  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:20.695798  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:20.695839  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:20.709443  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:20.709478  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:20.779211  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:20.779237  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:20.779253  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:20.857966  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:20.858012  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:20.896168  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:22.896667  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:21.455716  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:23.950823  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:25.952038  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:22.592225  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:25.091934  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:23.398095  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:23.412622  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:23.412686  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:23.446582  662586 cri.go:89] found id: ""
	I1209 11:54:23.446616  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.446628  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:23.446637  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:23.446705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:23.487896  662586 cri.go:89] found id: ""
	I1209 11:54:23.487926  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.487935  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:23.487941  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:23.488007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:23.521520  662586 cri.go:89] found id: ""
	I1209 11:54:23.521559  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.521571  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:23.521579  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:23.521651  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:23.561296  662586 cri.go:89] found id: ""
	I1209 11:54:23.561329  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.561342  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:23.561350  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:23.561417  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:23.604936  662586 cri.go:89] found id: ""
	I1209 11:54:23.604965  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.604976  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:23.604985  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:23.605055  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:23.665193  662586 cri.go:89] found id: ""
	I1209 11:54:23.665225  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.665237  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:23.665247  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:23.665315  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:23.700202  662586 cri.go:89] found id: ""
	I1209 11:54:23.700239  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.700251  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:23.700259  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:23.700336  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:23.734877  662586 cri.go:89] found id: ""
	I1209 11:54:23.734907  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.734917  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:23.734927  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:23.734941  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:23.817328  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:23.817371  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:23.855052  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:23.855085  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:23.909107  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:23.909154  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:23.924198  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:23.924227  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:23.991976  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:26.492366  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:26.506223  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:26.506299  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:26.544932  662586 cri.go:89] found id: ""
	I1209 11:54:26.544974  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.544987  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:26.544997  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:26.545080  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:26.579581  662586 cri.go:89] found id: ""
	I1209 11:54:26.579621  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.579634  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:26.579643  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:26.579716  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:26.612510  662586 cri.go:89] found id: ""
	I1209 11:54:26.612545  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.612567  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:26.612577  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:26.612646  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:26.646273  662586 cri.go:89] found id: ""
	I1209 11:54:26.646306  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.646316  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:26.646322  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:26.646376  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:26.682027  662586 cri.go:89] found id: ""
	I1209 11:54:26.682063  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.682072  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:26.682078  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:26.682132  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:26.715822  662586 cri.go:89] found id: ""
	I1209 11:54:26.715876  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.715889  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:26.715898  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:26.715964  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:26.755976  662586 cri.go:89] found id: ""
	I1209 11:54:26.756016  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.756031  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:26.756040  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:26.756122  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:26.787258  662586 cri.go:89] found id: ""
	I1209 11:54:26.787297  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.787308  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:26.787319  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:26.787333  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:26.800534  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:26.800573  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:26.865767  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:26.865798  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:26.865824  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:26.950409  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:26.950460  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:26.994281  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:26.994320  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:25.396411  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:27.894846  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:28.451141  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:30.455101  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:27.591769  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:30.091528  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:29.544568  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:29.565182  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:29.565263  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:29.625116  662586 cri.go:89] found id: ""
	I1209 11:54:29.625155  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.625168  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:29.625181  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:29.625257  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:29.673689  662586 cri.go:89] found id: ""
	I1209 11:54:29.673727  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.673739  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:29.673747  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:29.673811  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:29.705925  662586 cri.go:89] found id: ""
	I1209 11:54:29.705959  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.705971  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:29.705979  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:29.706033  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:29.738731  662586 cri.go:89] found id: ""
	I1209 11:54:29.738759  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.738767  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:29.738774  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:29.738832  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:29.770778  662586 cri.go:89] found id: ""
	I1209 11:54:29.770814  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.770826  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:29.770833  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:29.770899  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:29.801925  662586 cri.go:89] found id: ""
	I1209 11:54:29.801961  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.801973  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:29.801981  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:29.802050  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:29.833681  662586 cri.go:89] found id: ""
	I1209 11:54:29.833712  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.833722  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:29.833727  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:29.833791  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:29.873666  662586 cri.go:89] found id: ""
	I1209 11:54:29.873700  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.873712  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:29.873722  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:29.873735  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:29.914855  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:29.914895  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:29.967730  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:29.967772  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:29.982037  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:29.982070  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:30.047168  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:30.047195  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:30.047212  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:32.623371  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:32.636346  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:32.636411  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:32.677709  662586 cri.go:89] found id: ""
	I1209 11:54:32.677736  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.677744  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:32.677753  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:32.677805  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:29.896176  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.395216  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.952287  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:35.451456  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.092615  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:34.591397  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.710906  662586 cri.go:89] found id: ""
	I1209 11:54:32.710933  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.710942  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:32.710948  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:32.711000  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:32.744623  662586 cri.go:89] found id: ""
	I1209 11:54:32.744654  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.744667  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:32.744676  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:32.744736  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:32.779334  662586 cri.go:89] found id: ""
	I1209 11:54:32.779364  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.779375  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:32.779382  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:32.779443  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:32.814998  662586 cri.go:89] found id: ""
	I1209 11:54:32.815032  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.815046  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:32.815055  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:32.815128  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:32.850054  662586 cri.go:89] found id: ""
	I1209 11:54:32.850099  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.850116  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:32.850127  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:32.850213  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:32.885769  662586 cri.go:89] found id: ""
	I1209 11:54:32.885805  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.885818  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:32.885827  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:32.885899  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:32.927973  662586 cri.go:89] found id: ""
	I1209 11:54:32.928001  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.928010  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:32.928019  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:32.928032  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:32.981915  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:32.981966  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:32.995817  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:32.995851  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:33.062409  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:33.062445  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:33.062462  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:33.146967  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:33.147011  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:35.688225  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:35.701226  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:35.701325  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:35.738628  662586 cri.go:89] found id: ""
	I1209 11:54:35.738655  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.738663  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:35.738670  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:35.738737  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:35.771125  662586 cri.go:89] found id: ""
	I1209 11:54:35.771163  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.771177  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:35.771187  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:35.771260  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:35.806244  662586 cri.go:89] found id: ""
	I1209 11:54:35.806277  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.806290  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:35.806301  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:35.806376  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:35.839871  662586 cri.go:89] found id: ""
	I1209 11:54:35.839912  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.839925  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:35.839932  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:35.840010  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:35.874994  662586 cri.go:89] found id: ""
	I1209 11:54:35.875034  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.875046  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:35.875054  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:35.875129  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:35.910802  662586 cri.go:89] found id: ""
	I1209 11:54:35.910834  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.910846  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:35.910855  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:35.910927  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:35.944633  662586 cri.go:89] found id: ""
	I1209 11:54:35.944663  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.944672  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:35.944678  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:35.944749  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:35.982732  662586 cri.go:89] found id: ""
	I1209 11:54:35.982781  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.982796  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:35.982811  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:35.982830  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:35.996271  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:35.996302  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:36.063463  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:36.063533  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:36.063554  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:36.141789  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:36.141833  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:36.187015  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:36.187047  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:34.895890  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.396472  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.951404  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:40.452814  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.091548  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:39.092168  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:38.739585  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:38.754322  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:38.754394  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:38.792497  662586 cri.go:89] found id: ""
	I1209 11:54:38.792525  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.792535  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:38.792543  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:38.792608  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:38.829730  662586 cri.go:89] found id: ""
	I1209 11:54:38.829759  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.829768  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:38.829774  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:38.829834  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:38.869942  662586 cri.go:89] found id: ""
	I1209 11:54:38.869981  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.869994  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:38.870015  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:38.870085  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:38.906001  662586 cri.go:89] found id: ""
	I1209 11:54:38.906041  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.906054  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:38.906063  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:38.906133  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:38.944389  662586 cri.go:89] found id: ""
	I1209 11:54:38.944427  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.944445  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:38.944453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:38.944534  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:38.979633  662586 cri.go:89] found id: ""
	I1209 11:54:38.979665  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.979674  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:38.979681  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:38.979735  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:39.016366  662586 cri.go:89] found id: ""
	I1209 11:54:39.016402  662586 logs.go:282] 0 containers: []
	W1209 11:54:39.016416  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:39.016424  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:39.016489  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:39.049084  662586 cri.go:89] found id: ""
	I1209 11:54:39.049116  662586 logs.go:282] 0 containers: []
	W1209 11:54:39.049125  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:39.049134  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:39.049148  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:39.113953  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:39.113985  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:39.114004  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:39.191715  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:39.191767  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:39.232127  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:39.232167  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:39.281406  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:39.281448  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:41.795395  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:41.810293  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:41.810364  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:41.849819  662586 cri.go:89] found id: ""
	I1209 11:54:41.849858  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.849872  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:41.849882  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:41.849952  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:41.883871  662586 cri.go:89] found id: ""
	I1209 11:54:41.883908  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.883934  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:41.883942  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:41.884017  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:41.918194  662586 cri.go:89] found id: ""
	I1209 11:54:41.918230  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.918239  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:41.918245  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:41.918312  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:41.950878  662586 cri.go:89] found id: ""
	I1209 11:54:41.950912  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.950924  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:41.950933  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:41.950995  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:41.982922  662586 cri.go:89] found id: ""
	I1209 11:54:41.982964  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.982976  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:41.982985  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:41.983064  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:42.014066  662586 cri.go:89] found id: ""
	I1209 11:54:42.014107  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.014120  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:42.014129  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:42.014229  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:42.048017  662586 cri.go:89] found id: ""
	I1209 11:54:42.048056  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.048070  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:42.048079  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:42.048146  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:42.080585  662586 cri.go:89] found id: ""
	I1209 11:54:42.080614  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.080624  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:42.080634  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:42.080646  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:42.135012  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:42.135054  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:42.148424  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:42.148462  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:42.219179  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:42.219206  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:42.219230  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:42.305855  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:42.305902  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:39.895830  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:41.896255  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:44.398373  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:42.949835  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:44.951542  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:41.590831  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:43.592053  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:45.593044  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:44.843158  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:44.856317  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:44.856380  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:44.890940  662586 cri.go:89] found id: ""
	I1209 11:54:44.890984  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.891003  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:44.891012  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:44.891081  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:44.923657  662586 cri.go:89] found id: ""
	I1209 11:54:44.923684  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.923692  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:44.923698  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:44.923769  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:44.957512  662586 cri.go:89] found id: ""
	I1209 11:54:44.957545  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.957558  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:44.957566  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:44.957636  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:44.998084  662586 cri.go:89] found id: ""
	I1209 11:54:44.998112  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.998121  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:44.998128  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:44.998210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:45.030335  662586 cri.go:89] found id: ""
	I1209 11:54:45.030360  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.030369  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:45.030375  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:45.030447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:45.063098  662586 cri.go:89] found id: ""
	I1209 11:54:45.063127  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.063135  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:45.063141  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:45.063210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:45.098430  662586 cri.go:89] found id: ""
	I1209 11:54:45.098458  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.098466  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:45.098472  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:45.098526  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:45.132064  662586 cri.go:89] found id: ""
	I1209 11:54:45.132094  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.132102  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:45.132113  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:45.132131  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:45.185512  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:45.185556  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:45.199543  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:45.199572  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:45.268777  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:45.268803  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:45.268817  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:45.352250  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:45.352299  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:46.897153  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:49.395935  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:46.952862  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:49.450006  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:48.092394  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:50.591937  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:47.892201  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:47.906961  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:47.907053  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:47.941349  662586 cri.go:89] found id: ""
	I1209 11:54:47.941394  662586 logs.go:282] 0 containers: []
	W1209 11:54:47.941408  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:47.941418  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:47.941479  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:47.981086  662586 cri.go:89] found id: ""
	I1209 11:54:47.981120  662586 logs.go:282] 0 containers: []
	W1209 11:54:47.981133  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:47.981141  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:47.981210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:48.014105  662586 cri.go:89] found id: ""
	I1209 11:54:48.014142  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.014151  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:48.014162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:48.014249  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:48.049506  662586 cri.go:89] found id: ""
	I1209 11:54:48.049535  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.049544  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:48.049552  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:48.049619  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:48.084284  662586 cri.go:89] found id: ""
	I1209 11:54:48.084314  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.084324  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:48.084336  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:48.084406  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:48.117318  662586 cri.go:89] found id: ""
	I1209 11:54:48.117349  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.117362  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:48.117371  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:48.117441  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:48.150121  662586 cri.go:89] found id: ""
	I1209 11:54:48.150151  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.150187  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:48.150198  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:48.150266  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:48.180919  662586 cri.go:89] found id: ""
	I1209 11:54:48.180947  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.180955  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:48.180966  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:48.180978  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:48.249572  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:48.249602  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:48.249617  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:48.324508  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:48.324552  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:48.363856  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:48.363901  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:48.415662  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:48.415721  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:50.929811  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:50.943650  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:50.943714  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:50.976444  662586 cri.go:89] found id: ""
	I1209 11:54:50.976480  662586 logs.go:282] 0 containers: []
	W1209 11:54:50.976493  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:50.976502  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:50.976574  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:51.016567  662586 cri.go:89] found id: ""
	I1209 11:54:51.016600  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.016613  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:51.016621  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:51.016699  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:51.048933  662586 cri.go:89] found id: ""
	I1209 11:54:51.048967  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.048977  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:51.048986  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:51.049073  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:51.083292  662586 cri.go:89] found id: ""
	I1209 11:54:51.083333  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.083345  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:51.083354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:51.083423  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:51.118505  662586 cri.go:89] found id: ""
	I1209 11:54:51.118547  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.118560  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:51.118571  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:51.118644  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:51.152818  662586 cri.go:89] found id: ""
	I1209 11:54:51.152847  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.152856  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:51.152870  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:51.152922  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:51.186953  662586 cri.go:89] found id: ""
	I1209 11:54:51.186981  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.186991  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:51.186997  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:51.187063  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:51.219305  662586 cri.go:89] found id: ""
	I1209 11:54:51.219337  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.219348  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:51.219361  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:51.219380  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:51.256295  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:51.256338  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:51.313751  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:51.313806  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:51.326940  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:51.326977  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:51.397395  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:51.397428  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:51.397445  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:51.396434  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.896554  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:51.456719  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.951566  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:52.592043  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:55.091800  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.975557  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:53.989509  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:53.989581  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:54.024363  662586 cri.go:89] found id: ""
	I1209 11:54:54.024403  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.024416  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:54.024423  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:54.024484  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:54.062618  662586 cri.go:89] found id: ""
	I1209 11:54:54.062649  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.062659  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:54.062667  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:54.062739  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:54.100194  662586 cri.go:89] found id: ""
	I1209 11:54:54.100231  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.100243  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:54.100252  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:54.100324  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:54.135302  662586 cri.go:89] found id: ""
	I1209 11:54:54.135341  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.135354  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:54.135363  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:54.135447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:54.170898  662586 cri.go:89] found id: ""
	I1209 11:54:54.170940  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.170953  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:54.170963  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:54.171035  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:54.205098  662586 cri.go:89] found id: ""
	I1209 11:54:54.205138  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.205151  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:54.205159  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:54.205223  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:54.239153  662586 cri.go:89] found id: ""
	I1209 11:54:54.239210  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.239226  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:54.239234  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:54.239307  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:54.278213  662586 cri.go:89] found id: ""
	I1209 11:54:54.278248  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.278260  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:54.278275  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:54.278296  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:54.348095  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:54.348128  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:54.348156  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:54.427181  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:54.427224  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:54.467623  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:54.467656  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:54.519690  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:54.519734  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:57.033524  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:57.046420  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:57.046518  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:57.079588  662586 cri.go:89] found id: ""
	I1209 11:54:57.079616  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.079626  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:57.079633  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:57.079687  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:57.114944  662586 cri.go:89] found id: ""
	I1209 11:54:57.114973  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.114982  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:57.114988  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:57.115043  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:57.147667  662586 cri.go:89] found id: ""
	I1209 11:54:57.147708  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.147721  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:57.147730  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:57.147794  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:57.182339  662586 cri.go:89] found id: ""
	I1209 11:54:57.182370  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.182386  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:57.182395  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:57.182470  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:57.223129  662586 cri.go:89] found id: ""
	I1209 11:54:57.223170  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.223186  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:57.223197  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:57.223270  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:57.262351  662586 cri.go:89] found id: ""
	I1209 11:54:57.262386  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.262398  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:57.262409  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:57.262471  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:57.298743  662586 cri.go:89] found id: ""
	I1209 11:54:57.298772  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.298782  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:57.298789  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:57.298856  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:57.339030  662586 cri.go:89] found id: ""
	I1209 11:54:57.339064  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.339073  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:57.339085  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:57.339122  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:57.352603  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:57.352637  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:57.426627  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:57.426653  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:57.426669  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:57.515357  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:57.515401  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:57.554882  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:57.554925  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:56.396610  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:58.895822  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:56.451429  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:58.951440  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:57.590864  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:00.091967  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:00.112082  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:00.124977  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:00.125056  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:00.159003  662586 cri.go:89] found id: ""
	I1209 11:55:00.159032  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.159041  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:00.159048  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:00.159101  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:00.192479  662586 cri.go:89] found id: ""
	I1209 11:55:00.192515  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.192527  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:00.192533  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:00.192587  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:00.226146  662586 cri.go:89] found id: ""
	I1209 11:55:00.226194  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.226208  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:00.226216  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:00.226273  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:00.260389  662586 cri.go:89] found id: ""
	I1209 11:55:00.260420  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.260430  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:00.260442  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:00.260500  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:00.296091  662586 cri.go:89] found id: ""
	I1209 11:55:00.296121  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.296131  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:00.296138  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:00.296195  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:00.332101  662586 cri.go:89] found id: ""
	I1209 11:55:00.332137  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.332150  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:00.332158  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:00.332244  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:00.377329  662586 cri.go:89] found id: ""
	I1209 11:55:00.377358  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.377368  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:00.377374  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:00.377438  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:00.415660  662586 cri.go:89] found id: ""
	I1209 11:55:00.415688  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.415751  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:00.415767  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:00.415781  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:00.467734  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:00.467776  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:00.481244  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:00.481280  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:00.545721  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:00.545755  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:00.545777  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:00.624482  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:00.624533  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:01.396452  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.895539  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:01.452337  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.950752  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:05.951246  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:02.092654  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:04.592173  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.168340  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:03.183354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:03.183439  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:03.223131  662586 cri.go:89] found id: ""
	I1209 11:55:03.223171  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.223185  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:03.223193  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:03.223263  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:03.256561  662586 cri.go:89] found id: ""
	I1209 11:55:03.256595  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.256603  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:03.256609  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:03.256667  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:03.289670  662586 cri.go:89] found id: ""
	I1209 11:55:03.289707  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.289722  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:03.289738  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:03.289813  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:03.323687  662586 cri.go:89] found id: ""
	I1209 11:55:03.323714  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.323724  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:03.323730  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:03.323786  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:03.358163  662586 cri.go:89] found id: ""
	I1209 11:55:03.358221  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.358233  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:03.358241  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:03.358311  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:03.399688  662586 cri.go:89] found id: ""
	I1209 11:55:03.399721  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.399734  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:03.399744  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:03.399812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:03.433909  662586 cri.go:89] found id: ""
	I1209 11:55:03.433939  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.433948  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:03.433954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:03.434011  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:03.470208  662586 cri.go:89] found id: ""
	I1209 11:55:03.470239  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.470248  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:03.470270  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:03.470289  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:03.545801  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:03.545848  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:03.584357  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:03.584389  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:03.641241  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:03.641283  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:03.657034  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:03.657080  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:03.731285  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:06.232380  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:06.246339  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:06.246411  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:06.281323  662586 cri.go:89] found id: ""
	I1209 11:55:06.281362  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.281377  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:06.281385  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:06.281444  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:06.318225  662586 cri.go:89] found id: ""
	I1209 11:55:06.318261  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.318277  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:06.318293  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:06.318364  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:06.353649  662586 cri.go:89] found id: ""
	I1209 11:55:06.353685  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.353699  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:06.353708  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:06.353782  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:06.395204  662586 cri.go:89] found id: ""
	I1209 11:55:06.395242  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.395257  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:06.395266  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:06.395335  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:06.436421  662586 cri.go:89] found id: ""
	I1209 11:55:06.436452  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.436462  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:06.436469  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:06.436524  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:06.472218  662586 cri.go:89] found id: ""
	I1209 11:55:06.472246  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.472255  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:06.472268  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:06.472335  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:06.506585  662586 cri.go:89] found id: ""
	I1209 11:55:06.506629  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.506640  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:06.506647  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:06.506702  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:06.541442  662586 cri.go:89] found id: ""
	I1209 11:55:06.541472  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.541481  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:06.541493  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:06.541512  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:06.592642  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:06.592682  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:06.606764  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:06.606805  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:06.677693  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:06.677720  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:06.677740  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:06.766074  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:06.766124  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:05.896263  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:08.396283  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:07.951409  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:10.451540  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:06.592724  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:09.091961  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:09.305144  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:09.319352  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:09.319444  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:09.357918  662586 cri.go:89] found id: ""
	I1209 11:55:09.358027  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.358066  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:09.358077  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:09.358139  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:09.413181  662586 cri.go:89] found id: ""
	I1209 11:55:09.413213  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.413226  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:09.413234  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:09.413310  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:09.448417  662586 cri.go:89] found id: ""
	I1209 11:55:09.448460  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.448471  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:09.448480  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:09.448566  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:09.489732  662586 cri.go:89] found id: ""
	I1209 11:55:09.489765  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.489775  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:09.489781  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:09.489845  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:09.524919  662586 cri.go:89] found id: ""
	I1209 11:55:09.524948  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.524959  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:09.524968  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:09.525051  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:09.563268  662586 cri.go:89] found id: ""
	I1209 11:55:09.563301  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.563311  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:09.563318  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:09.563373  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:09.598747  662586 cri.go:89] found id: ""
	I1209 11:55:09.598780  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.598790  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:09.598798  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:09.598866  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:09.634447  662586 cri.go:89] found id: ""
	I1209 11:55:09.634479  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.634492  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:09.634505  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:09.634520  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:09.647380  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:09.647419  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:09.721335  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:09.721363  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:09.721380  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:09.801039  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:09.801088  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:09.840929  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:09.840971  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:12.393810  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:12.407553  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:12.407654  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:12.444391  662586 cri.go:89] found id: ""
	I1209 11:55:12.444437  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.444450  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:12.444459  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:12.444533  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:12.482714  662586 cri.go:89] found id: ""
	I1209 11:55:12.482752  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.482764  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:12.482771  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:12.482853  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:12.518139  662586 cri.go:89] found id: ""
	I1209 11:55:12.518187  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.518202  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:12.518211  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:12.518281  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:12.556903  662586 cri.go:89] found id: ""
	I1209 11:55:12.556938  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.556950  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:12.556958  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:12.557028  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:12.591915  662586 cri.go:89] found id: ""
	I1209 11:55:12.591953  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.591963  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:12.591971  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:12.592038  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:12.629767  662586 cri.go:89] found id: ""
	I1209 11:55:12.629797  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.629806  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:12.629812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:12.629878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:12.667677  662586 cri.go:89] found id: ""
	I1209 11:55:12.667710  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.667720  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:12.667727  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:12.667781  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:10.896109  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.896992  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.451770  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:14.952359  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:11.591952  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:14.092213  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.705720  662586 cri.go:89] found id: ""
	I1209 11:55:12.705747  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.705756  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:12.705766  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:12.705780  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:12.758399  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:12.758441  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:12.772297  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:12.772336  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:12.839545  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:12.839569  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:12.839582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:12.918424  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:12.918467  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:15.458122  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:15.473193  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:15.473284  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:15.508756  662586 cri.go:89] found id: ""
	I1209 11:55:15.508790  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.508799  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:15.508806  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:15.508862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:15.544735  662586 cri.go:89] found id: ""
	I1209 11:55:15.544770  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.544782  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:15.544791  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:15.544866  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:15.577169  662586 cri.go:89] found id: ""
	I1209 11:55:15.577200  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.577210  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:15.577216  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:15.577277  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:15.610662  662586 cri.go:89] found id: ""
	I1209 11:55:15.610690  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.610700  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:15.610707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:15.610763  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:15.645339  662586 cri.go:89] found id: ""
	I1209 11:55:15.645375  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.645386  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:15.645394  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:15.645469  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:15.682044  662586 cri.go:89] found id: ""
	I1209 11:55:15.682079  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.682096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:15.682106  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:15.682201  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:15.717193  662586 cri.go:89] found id: ""
	I1209 11:55:15.717228  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.717245  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:15.717256  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:15.717332  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:15.751756  662586 cri.go:89] found id: ""
	I1209 11:55:15.751792  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.751803  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:15.751813  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:15.751827  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:15.811010  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:15.811063  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:15.842556  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:15.842597  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:15.920169  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:15.920195  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:15.920209  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:16.003180  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:16.003226  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:15.395666  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:17.396041  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:19.396262  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:17.451272  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:19.951638  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:16.591423  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:18.592456  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:21.090108  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:18.542563  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:18.555968  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:18.556059  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:18.588746  662586 cri.go:89] found id: ""
	I1209 11:55:18.588780  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.588790  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:18.588797  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:18.588854  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:18.623664  662586 cri.go:89] found id: ""
	I1209 11:55:18.623707  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.623720  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:18.623728  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:18.623798  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:18.659012  662586 cri.go:89] found id: ""
	I1209 11:55:18.659051  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.659065  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:18.659074  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:18.659148  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:18.693555  662586 cri.go:89] found id: ""
	I1209 11:55:18.693588  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.693600  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:18.693607  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:18.693661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:18.726609  662586 cri.go:89] found id: ""
	I1209 11:55:18.726641  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.726652  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:18.726659  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:18.726712  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:18.760654  662586 cri.go:89] found id: ""
	I1209 11:55:18.760682  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.760694  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:18.760704  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:18.760761  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:18.794656  662586 cri.go:89] found id: ""
	I1209 11:55:18.794688  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.794699  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:18.794706  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:18.794769  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:18.829988  662586 cri.go:89] found id: ""
	I1209 11:55:18.830030  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.830045  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:18.830059  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:18.830073  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:18.872523  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:18.872558  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:18.929408  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:18.929449  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:18.943095  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:18.943133  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:19.009125  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:19.009150  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:19.009164  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:21.587418  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:21.606271  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:21.606358  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:21.653536  662586 cri.go:89] found id: ""
	I1209 11:55:21.653574  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.653586  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:21.653595  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:21.653671  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:21.687023  662586 cri.go:89] found id: ""
	I1209 11:55:21.687049  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.687060  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:21.687068  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:21.687131  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:21.720112  662586 cri.go:89] found id: ""
	I1209 11:55:21.720150  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.720163  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:21.720171  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:21.720243  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:21.754697  662586 cri.go:89] found id: ""
	I1209 11:55:21.754729  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.754740  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:21.754749  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:21.754814  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:21.793926  662586 cri.go:89] found id: ""
	I1209 11:55:21.793957  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.793967  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:21.793973  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:21.794040  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:21.827572  662586 cri.go:89] found id: ""
	I1209 11:55:21.827609  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.827622  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:21.827633  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:21.827700  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:21.861442  662586 cri.go:89] found id: ""
	I1209 11:55:21.861472  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.861490  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:21.861499  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:21.861565  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:21.894858  662586 cri.go:89] found id: ""
	I1209 11:55:21.894884  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.894892  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:21.894901  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:21.894914  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:21.942567  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:21.942625  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:21.956849  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:21.956879  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:22.020700  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:22.020724  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:22.020738  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:22.095730  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:22.095767  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:21.896304  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.395936  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:21.951928  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.450997  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:23.090962  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:25.091816  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.631715  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:24.644165  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:24.644252  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:24.677720  662586 cri.go:89] found id: ""
	I1209 11:55:24.677757  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.677769  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:24.677778  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:24.677835  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:24.711053  662586 cri.go:89] found id: ""
	I1209 11:55:24.711086  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.711095  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:24.711101  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:24.711154  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:24.744107  662586 cri.go:89] found id: ""
	I1209 11:55:24.744139  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.744148  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:24.744154  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:24.744210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:24.777811  662586 cri.go:89] found id: ""
	I1209 11:55:24.777853  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.777866  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:24.777876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:24.777938  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:24.810524  662586 cri.go:89] found id: ""
	I1209 11:55:24.810558  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.810571  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:24.810580  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:24.810648  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:24.843551  662586 cri.go:89] found id: ""
	I1209 11:55:24.843582  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.843590  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:24.843597  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:24.843649  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:24.875342  662586 cri.go:89] found id: ""
	I1209 11:55:24.875371  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.875384  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:24.875390  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:24.875446  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:24.910298  662586 cri.go:89] found id: ""
	I1209 11:55:24.910329  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.910340  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:24.910352  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:24.910377  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:24.962151  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:24.962204  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:24.976547  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:24.976577  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:25.050606  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:25.050635  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:25.050652  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:25.134204  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:25.134254  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:27.671220  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:27.685132  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:27.685194  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:26.895311  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:28.895954  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:26.950106  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:28.950915  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:30.952019  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:27.591908  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:30.090353  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:27.718113  662586 cri.go:89] found id: ""
	I1209 11:55:27.718141  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.718150  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:27.718160  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:27.718242  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:27.752350  662586 cri.go:89] found id: ""
	I1209 11:55:27.752384  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.752395  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:27.752401  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:27.752481  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:27.797360  662586 cri.go:89] found id: ""
	I1209 11:55:27.797393  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.797406  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:27.797415  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:27.797488  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:27.834549  662586 cri.go:89] found id: ""
	I1209 11:55:27.834579  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.834588  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:27.834594  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:27.834655  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:27.874403  662586 cri.go:89] found id: ""
	I1209 11:55:27.874440  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.874465  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:27.874474  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:27.874557  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:27.914324  662586 cri.go:89] found id: ""
	I1209 11:55:27.914360  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.914373  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:27.914380  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:27.914450  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:27.948001  662586 cri.go:89] found id: ""
	I1209 11:55:27.948043  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.948056  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:27.948066  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:27.948219  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:27.982329  662586 cri.go:89] found id: ""
	I1209 11:55:27.982360  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.982369  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:27.982379  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:27.982391  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:28.038165  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:28.038228  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:28.051578  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:28.051609  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:28.119914  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:28.119937  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:28.119951  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:28.195634  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:28.195679  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:30.735392  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:30.748430  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:30.748521  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:30.780500  662586 cri.go:89] found id: ""
	I1209 11:55:30.780528  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.780537  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:30.780544  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:30.780606  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:30.812430  662586 cri.go:89] found id: ""
	I1209 11:55:30.812462  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.812470  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:30.812477  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:30.812530  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:30.854030  662586 cri.go:89] found id: ""
	I1209 11:55:30.854057  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.854066  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:30.854073  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:30.854130  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:30.892144  662586 cri.go:89] found id: ""
	I1209 11:55:30.892182  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.892202  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:30.892211  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:30.892284  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:30.927540  662586 cri.go:89] found id: ""
	I1209 11:55:30.927576  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.927590  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:30.927597  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:30.927660  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:30.963820  662586 cri.go:89] found id: ""
	I1209 11:55:30.963852  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.963861  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:30.963867  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:30.963920  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:30.997793  662586 cri.go:89] found id: ""
	I1209 11:55:30.997819  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.997828  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:30.997836  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:30.997902  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:31.031649  662586 cri.go:89] found id: ""
	I1209 11:55:31.031699  662586 logs.go:282] 0 containers: []
	W1209 11:55:31.031712  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:31.031726  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:31.031746  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:31.101464  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:31.101492  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:31.101509  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:31.184635  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:31.184681  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:31.222690  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:31.222732  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:31.276518  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:31.276566  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:30.896544  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.395861  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.451560  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:35.952567  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:32.091788  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:34.592091  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.790941  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:33.805299  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:33.805390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:33.844205  662586 cri.go:89] found id: ""
	I1209 11:55:33.844241  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.844253  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:33.844262  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:33.844337  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:33.883378  662586 cri.go:89] found id: ""
	I1209 11:55:33.883410  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.883424  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:33.883431  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:33.883505  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:33.920007  662586 cri.go:89] found id: ""
	I1209 11:55:33.920049  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.920061  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:33.920074  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:33.920141  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:33.956111  662586 cri.go:89] found id: ""
	I1209 11:55:33.956163  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.956175  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:33.956183  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:33.956241  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:33.990057  662586 cri.go:89] found id: ""
	I1209 11:55:33.990092  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.990102  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:33.990109  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:33.990166  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:34.023046  662586 cri.go:89] found id: ""
	I1209 11:55:34.023082  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.023096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:34.023103  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:34.023171  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:34.055864  662586 cri.go:89] found id: ""
	I1209 11:55:34.055898  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.055909  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:34.055916  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:34.055987  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:34.091676  662586 cri.go:89] found id: ""
	I1209 11:55:34.091710  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.091722  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:34.091733  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:34.091747  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:34.142959  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:34.143002  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:34.156431  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:34.156466  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:34.230277  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:34.230303  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:34.230320  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:34.313660  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:34.313713  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:36.850056  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:36.862486  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:36.862582  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:36.893134  662586 cri.go:89] found id: ""
	I1209 11:55:36.893163  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.893173  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:36.893179  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:36.893257  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:36.927438  662586 cri.go:89] found id: ""
	I1209 11:55:36.927469  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.927479  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:36.927485  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:36.927546  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:36.958787  662586 cri.go:89] found id: ""
	I1209 11:55:36.958818  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.958829  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:36.958837  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:36.958901  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:36.995470  662586 cri.go:89] found id: ""
	I1209 11:55:36.995508  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.995520  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:36.995529  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:36.995590  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:37.026705  662586 cri.go:89] found id: ""
	I1209 11:55:37.026736  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.026746  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:37.026752  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:37.026805  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:37.059717  662586 cri.go:89] found id: ""
	I1209 11:55:37.059748  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.059756  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:37.059762  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:37.059820  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:37.094049  662586 cri.go:89] found id: ""
	I1209 11:55:37.094076  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.094088  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:37.094097  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:37.094190  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:37.128684  662586 cri.go:89] found id: ""
	I1209 11:55:37.128715  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.128724  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:37.128735  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:37.128755  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:37.177932  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:37.177973  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:37.191218  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:37.191252  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:37.256488  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:37.256521  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:37.256538  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:37.330603  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:37.330647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:35.895823  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.895972  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.952764  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:40.450704  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.092013  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:39.591402  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:39.868604  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:39.881991  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:39.882063  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:39.916750  662586 cri.go:89] found id: ""
	I1209 11:55:39.916786  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.916799  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:39.916806  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:39.916874  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:39.957744  662586 cri.go:89] found id: ""
	I1209 11:55:39.957773  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.957781  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:39.957788  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:39.957854  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:39.994613  662586 cri.go:89] found id: ""
	I1209 11:55:39.994645  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.994654  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:39.994661  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:39.994726  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:40.032606  662586 cri.go:89] found id: ""
	I1209 11:55:40.032635  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.032644  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:40.032650  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:40.032710  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:40.067172  662586 cri.go:89] found id: ""
	I1209 11:55:40.067204  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.067214  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:40.067221  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:40.067278  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:40.101391  662586 cri.go:89] found id: ""
	I1209 11:55:40.101423  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.101432  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:40.101439  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:40.101510  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:40.133160  662586 cri.go:89] found id: ""
	I1209 11:55:40.133196  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.133209  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:40.133217  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:40.133283  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:40.166105  662586 cri.go:89] found id: ""
	I1209 11:55:40.166137  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.166145  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:40.166160  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:40.166187  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:40.231525  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:40.231559  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:40.231582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:40.311298  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:40.311354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:40.350040  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:40.350077  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:40.404024  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:40.404061  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:39.896541  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.396800  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.453720  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:44.950595  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.091300  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:44.591230  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.917868  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:42.930289  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:42.930357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:42.962822  662586 cri.go:89] found id: ""
	I1209 11:55:42.962856  662586 logs.go:282] 0 containers: []
	W1209 11:55:42.962869  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:42.962878  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:42.962950  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:42.996932  662586 cri.go:89] found id: ""
	I1209 11:55:42.996962  662586 logs.go:282] 0 containers: []
	W1209 11:55:42.996972  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:42.996979  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:42.997040  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:43.031782  662586 cri.go:89] found id: ""
	I1209 11:55:43.031824  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.031837  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:43.031846  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:43.031915  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:43.064717  662586 cri.go:89] found id: ""
	I1209 11:55:43.064751  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.064764  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:43.064774  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:43.064851  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:43.097248  662586 cri.go:89] found id: ""
	I1209 11:55:43.097278  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.097287  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:43.097294  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:43.097356  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:43.135726  662586 cri.go:89] found id: ""
	I1209 11:55:43.135766  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.135779  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:43.135788  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:43.135881  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:43.171120  662586 cri.go:89] found id: ""
	I1209 11:55:43.171148  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.171157  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:43.171163  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:43.171216  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:43.207488  662586 cri.go:89] found id: ""
	I1209 11:55:43.207523  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.207533  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:43.207545  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:43.207565  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:43.276112  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:43.276142  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:43.276159  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:43.354942  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:43.354990  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:43.392755  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:43.392800  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:43.445708  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:43.445752  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:45.962533  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:45.975508  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:45.975589  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:46.009619  662586 cri.go:89] found id: ""
	I1209 11:55:46.009653  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.009663  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:46.009670  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:46.009726  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:46.042218  662586 cri.go:89] found id: ""
	I1209 11:55:46.042250  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.042259  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:46.042265  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:46.042318  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:46.076204  662586 cri.go:89] found id: ""
	I1209 11:55:46.076239  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.076249  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:46.076255  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:46.076326  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:46.113117  662586 cri.go:89] found id: ""
	I1209 11:55:46.113145  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.113154  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:46.113160  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:46.113225  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:46.148232  662586 cri.go:89] found id: ""
	I1209 11:55:46.148277  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.148293  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:46.148303  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:46.148379  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:46.185028  662586 cri.go:89] found id: ""
	I1209 11:55:46.185083  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.185096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:46.185106  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:46.185200  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:46.222882  662586 cri.go:89] found id: ""
	I1209 11:55:46.222920  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.222933  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:46.222941  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:46.223007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:46.263486  662586 cri.go:89] found id: ""
	I1209 11:55:46.263528  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.263538  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:46.263549  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:46.263565  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:46.340524  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:46.340550  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:46.340567  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:46.422768  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:46.422810  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:46.464344  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:46.464382  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:46.517311  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:46.517354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:44.895283  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.895427  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:48.895674  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.952912  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:48.953432  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.591521  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:49.093057  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:49.031192  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:49.043840  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:49.043929  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:49.077648  662586 cri.go:89] found id: ""
	I1209 11:55:49.077705  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.077720  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:49.077730  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:49.077802  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:49.114111  662586 cri.go:89] found id: ""
	I1209 11:55:49.114138  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.114146  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:49.114154  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:49.114236  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:49.147870  662586 cri.go:89] found id: ""
	I1209 11:55:49.147908  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.147917  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:49.147923  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:49.147976  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:49.185223  662586 cri.go:89] found id: ""
	I1209 11:55:49.185256  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.185269  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:49.185277  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:49.185350  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:49.218037  662586 cri.go:89] found id: ""
	I1209 11:55:49.218068  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.218077  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:49.218084  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:49.218138  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:49.255483  662586 cri.go:89] found id: ""
	I1209 11:55:49.255522  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.255535  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:49.255549  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:49.255629  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:49.288623  662586 cri.go:89] found id: ""
	I1209 11:55:49.288650  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.288659  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:49.288666  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:49.288732  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:49.322880  662586 cri.go:89] found id: ""
	I1209 11:55:49.322913  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.322921  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:49.322930  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:49.322943  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:49.372380  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:49.372428  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:49.385877  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:49.385914  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:49.460078  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:49.460101  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:49.460114  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:49.534588  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:49.534647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:52.071408  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:52.084198  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:52.084276  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:52.118908  662586 cri.go:89] found id: ""
	I1209 11:55:52.118937  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.118950  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:52.118958  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:52.119026  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:52.156494  662586 cri.go:89] found id: ""
	I1209 11:55:52.156521  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.156530  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:52.156535  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:52.156586  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:52.196037  662586 cri.go:89] found id: ""
	I1209 11:55:52.196075  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.196094  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:52.196102  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:52.196177  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:52.229436  662586 cri.go:89] found id: ""
	I1209 11:55:52.229465  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.229477  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:52.229486  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:52.229558  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:52.268751  662586 cri.go:89] found id: ""
	I1209 11:55:52.268785  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.268797  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:52.268805  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:52.268871  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:52.302405  662586 cri.go:89] found id: ""
	I1209 11:55:52.302436  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.302446  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:52.302453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:52.302522  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:52.338641  662586 cri.go:89] found id: ""
	I1209 11:55:52.338676  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.338688  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:52.338698  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:52.338754  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:52.375541  662586 cri.go:89] found id: ""
	I1209 11:55:52.375578  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.375591  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:52.375604  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:52.375624  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:52.389140  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:52.389190  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:52.460520  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:52.460546  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:52.460562  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:52.535234  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:52.535280  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:52.573317  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:52.573354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:50.896292  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:52.896875  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:51.453540  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:53.456640  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:55.950197  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:51.590899  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:53.591317  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:56.092219  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:55.124068  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:55.136800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:55.136868  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:55.169724  662586 cri.go:89] found id: ""
	I1209 11:55:55.169757  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.169769  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:55.169777  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:55.169843  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:55.207466  662586 cri.go:89] found id: ""
	I1209 11:55:55.207514  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.207528  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:55.207537  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:55.207600  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:55.241761  662586 cri.go:89] found id: ""
	I1209 11:55:55.241790  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.241801  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:55.241809  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:55.241874  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:55.274393  662586 cri.go:89] found id: ""
	I1209 11:55:55.274434  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.274447  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:55.274455  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:55.274522  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:55.307942  662586 cri.go:89] found id: ""
	I1209 11:55:55.307988  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.308002  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:55.308012  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:55.308088  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:55.340074  662586 cri.go:89] found id: ""
	I1209 11:55:55.340107  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.340116  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:55.340122  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:55.340196  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:55.388077  662586 cri.go:89] found id: ""
	I1209 11:55:55.388119  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.388140  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:55.388149  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:55.388230  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:55.422923  662586 cri.go:89] found id: ""
	I1209 11:55:55.422961  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.422975  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:55.422990  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:55.423008  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:55.476178  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:55.476219  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:55.489891  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:55.489919  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:55.555705  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:55.555726  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:55.555745  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:55.634818  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:55.634862  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:55.396320  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:57.895122  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:57.951119  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:00.451659  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:58.092427  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:00.590304  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:58.173169  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:58.188529  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:58.188620  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:58.225602  662586 cri.go:89] found id: ""
	I1209 11:55:58.225630  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.225641  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:58.225649  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:58.225709  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:58.259597  662586 cri.go:89] found id: ""
	I1209 11:55:58.259638  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.259652  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:58.259662  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:58.259744  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:58.293287  662586 cri.go:89] found id: ""
	I1209 11:55:58.293320  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.293329  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:58.293336  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:58.293390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:58.326581  662586 cri.go:89] found id: ""
	I1209 11:55:58.326611  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.326622  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:58.326630  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:58.326699  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:58.359636  662586 cri.go:89] found id: ""
	I1209 11:55:58.359665  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.359675  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:58.359681  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:58.359736  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:58.396767  662586 cri.go:89] found id: ""
	I1209 11:55:58.396798  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.396809  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:58.396818  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:58.396887  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:58.428907  662586 cri.go:89] found id: ""
	I1209 11:55:58.428941  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.428954  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:58.428962  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:58.429032  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:58.466082  662586 cri.go:89] found id: ""
	I1209 11:55:58.466124  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.466136  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:58.466149  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:58.466186  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:58.542333  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:58.542378  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:58.582397  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:58.582436  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:58.632980  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:58.633030  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:58.648464  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:58.648514  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:58.711714  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:01.212475  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:01.225574  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:01.225642  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:01.259666  662586 cri.go:89] found id: ""
	I1209 11:56:01.259704  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.259718  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:01.259726  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:01.259800  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:01.295433  662586 cri.go:89] found id: ""
	I1209 11:56:01.295474  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.295495  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:01.295503  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:01.295561  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:01.330316  662586 cri.go:89] found id: ""
	I1209 11:56:01.330352  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.330364  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:01.330373  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:01.330447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:01.366762  662586 cri.go:89] found id: ""
	I1209 11:56:01.366797  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.366808  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:01.366814  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:01.366878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:01.403511  662586 cri.go:89] found id: ""
	I1209 11:56:01.403539  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.403547  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:01.403553  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:01.403604  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:01.436488  662586 cri.go:89] found id: ""
	I1209 11:56:01.436526  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.436538  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:01.436546  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:01.436617  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:01.471647  662586 cri.go:89] found id: ""
	I1209 11:56:01.471676  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.471685  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:01.471690  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:01.471744  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:01.504065  662586 cri.go:89] found id: ""
	I1209 11:56:01.504099  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.504111  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:01.504124  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:01.504143  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:01.553434  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:01.553482  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:01.567537  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:01.567579  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:01.636968  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:01.636995  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:01.637012  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:01.713008  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:01.713049  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:59.896841  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.396972  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.451893  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.453118  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.591218  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.592199  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.253143  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:04.266428  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:04.266512  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:04.298769  662586 cri.go:89] found id: ""
	I1209 11:56:04.298810  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.298823  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:04.298833  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:04.298913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:04.330392  662586 cri.go:89] found id: ""
	I1209 11:56:04.330428  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.330441  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:04.330449  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:04.330528  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:04.362409  662586 cri.go:89] found id: ""
	I1209 11:56:04.362443  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.362455  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:04.362463  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:04.362544  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:04.396853  662586 cri.go:89] found id: ""
	I1209 11:56:04.396884  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.396893  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:04.396899  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:04.396966  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:04.430425  662586 cri.go:89] found id: ""
	I1209 11:56:04.430461  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.430470  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:04.430477  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:04.430531  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:04.465354  662586 cri.go:89] found id: ""
	I1209 11:56:04.465391  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.465403  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:04.465411  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:04.465480  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:04.500114  662586 cri.go:89] found id: ""
	I1209 11:56:04.500156  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.500167  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:04.500179  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:04.500259  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:04.534853  662586 cri.go:89] found id: ""
	I1209 11:56:04.534888  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.534902  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:04.534914  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:04.534928  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:04.586419  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:04.586457  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:04.600690  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:04.600728  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:04.669645  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:04.669685  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:04.669703  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:04.747973  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:04.748026  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:07.288721  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:07.302905  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:07.302975  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:07.336686  662586 cri.go:89] found id: ""
	I1209 11:56:07.336720  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.336728  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:07.336735  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:07.336798  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:07.370119  662586 cri.go:89] found id: ""
	I1209 11:56:07.370150  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.370159  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:07.370165  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:07.370245  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:07.402818  662586 cri.go:89] found id: ""
	I1209 11:56:07.402845  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.402853  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:07.402861  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:07.402923  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:07.437694  662586 cri.go:89] found id: ""
	I1209 11:56:07.437722  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.437732  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:07.437741  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:07.437806  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:07.474576  662586 cri.go:89] found id: ""
	I1209 11:56:07.474611  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.474622  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:07.474629  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:07.474705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:07.508538  662586 cri.go:89] found id: ""
	I1209 11:56:07.508575  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.508585  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:07.508592  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:07.508661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:07.548863  662586 cri.go:89] found id: ""
	I1209 11:56:07.548897  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.548911  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:07.548922  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:07.549093  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:07.592515  662586 cri.go:89] found id: ""
	I1209 11:56:07.592543  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.592555  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:07.592564  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:07.592579  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:07.652176  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:07.652219  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:04.895898  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.395712  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.398273  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:06.950668  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.450539  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.091573  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.591049  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.703040  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:07.703094  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:07.717880  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:07.717924  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:07.783396  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:07.783425  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:07.783441  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:10.362395  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:10.377478  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:10.377574  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:10.411923  662586 cri.go:89] found id: ""
	I1209 11:56:10.411956  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.411969  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:10.411978  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:10.412049  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:10.444601  662586 cri.go:89] found id: ""
	I1209 11:56:10.444633  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.444642  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:10.444648  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:10.444705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:10.486720  662586 cri.go:89] found id: ""
	I1209 11:56:10.486753  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.486763  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:10.486769  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:10.486822  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:10.523535  662586 cri.go:89] found id: ""
	I1209 11:56:10.523572  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.523581  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:10.523587  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:10.523641  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:10.557701  662586 cri.go:89] found id: ""
	I1209 11:56:10.557741  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.557754  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:10.557762  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:10.557834  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:10.593914  662586 cri.go:89] found id: ""
	I1209 11:56:10.593949  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.593959  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:10.593965  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:10.594017  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:10.626367  662586 cri.go:89] found id: ""
	I1209 11:56:10.626469  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.626482  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:10.626489  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:10.626547  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:10.665415  662586 cri.go:89] found id: ""
	I1209 11:56:10.665446  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.665456  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:10.665467  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:10.665480  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:10.747483  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:10.747532  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:10.787728  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:10.787758  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:10.840678  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:10.840722  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:10.855774  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:10.855809  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:10.929638  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:11.896254  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:14.395661  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:11.451031  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.452502  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:15.951720  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:11.592197  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.593711  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:16.091641  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.430793  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:13.446156  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:13.446261  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:13.491624  662586 cri.go:89] found id: ""
	I1209 11:56:13.491662  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.491675  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:13.491684  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:13.491758  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:13.537619  662586 cri.go:89] found id: ""
	I1209 11:56:13.537653  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.537666  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:13.537675  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:13.537750  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:13.585761  662586 cri.go:89] found id: ""
	I1209 11:56:13.585796  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.585810  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:13.585819  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:13.585883  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:13.620740  662586 cri.go:89] found id: ""
	I1209 11:56:13.620774  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.620785  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:13.620791  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:13.620858  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:13.654405  662586 cri.go:89] found id: ""
	I1209 11:56:13.654433  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.654442  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:13.654448  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:13.654509  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:13.687520  662586 cri.go:89] found id: ""
	I1209 11:56:13.687547  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.687558  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:13.687566  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:13.687642  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:13.721105  662586 cri.go:89] found id: ""
	I1209 11:56:13.721140  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.721153  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:13.721162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:13.721238  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:13.753900  662586 cri.go:89] found id: ""
	I1209 11:56:13.753933  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.753945  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:13.753960  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:13.753978  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:13.805864  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:13.805909  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:13.819356  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:13.819393  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:13.896097  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:13.896128  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:13.896150  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:13.979041  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:13.979084  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
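	[Editor's note] The block above is minikube's diagnostics pass for a control plane that never came up: each component is looked up with "crictl ps -a --quiet --name=<component>", every query returns zero IDs, and only the kubelet/CRI-O journals, dmesg, and container status can be gathered (the describe-nodes call fails because nothing answers on localhost:8443). A minimal sketch of the same crictl lookup is below; it is illustrative only, not the minikube source, and uses only the crictl flags already shown in the log.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers shells out to crictl the same way the log lines above do
	// and returns the container IDs (possibly none) for one component name.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Printf("%s: error: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}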
	I1209 11:56:16.516777  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:16.529916  662586 kubeadm.go:597] duration metric: took 4m1.869807937s to restartPrimaryControlPlane
	W1209 11:56:16.530015  662586 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:56:16.530067  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:56:16.396353  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.896097  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.451294  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:20.452525  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.092780  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:20.593275  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.635832  662586 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.105742271s)
	I1209 11:56:18.635914  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:56:18.651678  662586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:56:18.661965  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:56:18.672060  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:56:18.672082  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:56:18.672147  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:56:18.681627  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:56:18.681697  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:56:18.691514  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:56:18.701210  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:56:18.701292  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:56:18.710934  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:56:18.720506  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:56:18.720583  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:56:18.729996  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:56:18.739425  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:56:18.739486  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
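	[Editor's note] The grep/rm pairs above are minikube's stale-config cleanup: each /etc/kubernetes/*.conf is kept only if it already references the expected control-plane endpoint and is removed otherwise (here every file is simply missing, so each grep exits with status 2 and the rm is a no-op). A rough, purely illustrative sketch of that per-file check, using the paths and endpoint from this log:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanIfStale keeps a kubeconfig file only if it already points at the
	// expected endpoint; otherwise it removes the file so kubeadm can
	// regenerate it. Not the minikube implementation, just the same idea.
	func cleanIfStale(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if err == nil && strings.Contains(string(data), endpoint) {
			return nil // already targets the right endpoint, keep it
		}
		rmErr := os.Remove(path)
		if rmErr != nil && !os.IsNotExist(rmErr) {
			return rmErr
		}
		if rmErr == nil {
			fmt.Printf("removed stale config %s\n", path)
		}
		return nil
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if err := cleanIfStale(f, endpoint); err != nil {
				fmt.Printf("%s: %v\n", f, err)
			}
		}
	}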
	I1209 11:56:18.748788  662586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:56:18.981849  662586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:56:21.396764  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:23.894781  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:22.950912  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:24.951678  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:23.091306  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:24.592439  662109 pod_ready.go:82] duration metric: took 4m0.007699806s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	E1209 11:56:24.592477  662109 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1209 11:56:24.592486  662109 pod_ready.go:39] duration metric: took 4m7.416528348s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
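	[Editor's note] At this point the 4m readiness wait for the metrics-server pod has expired (pod_ready.go above) and the run moves on to the apiserver checks. If one wanted to reproduce that readiness wait by hand against the same cluster, something like the sketch below would do it; the pod name, namespace, and the no-preload-820741 context are taken from this log, and the snippet is only an illustration, not part of the test harness.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// kubectl wait exits non-zero if the pod never becomes Ready within the
		// timeout, mirroring the context-deadline error logged above.
		cmd := exec.Command("kubectl", "--context", "no-preload-820741",
			"-n", "kube-system", "wait", "pod", "metrics-server-6867b74b74-pwcsr",
			"--for=condition=Ready", "--timeout=4m")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("wait failed:", err)
		}
	}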
	I1209 11:56:24.592504  662109 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:56:24.592537  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:24.592590  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:24.643050  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:24.643085  662109 cri.go:89] found id: ""
	I1209 11:56:24.643094  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:24.643151  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.647529  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:24.647590  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:24.683125  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:24.683150  662109 cri.go:89] found id: ""
	I1209 11:56:24.683159  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:24.683222  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.687584  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:24.687706  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:24.720663  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:24.720699  662109 cri.go:89] found id: ""
	I1209 11:56:24.720708  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:24.720769  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.724881  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:24.724942  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:24.766055  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:24.766081  662109 cri.go:89] found id: ""
	I1209 11:56:24.766091  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:24.766152  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.770491  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:24.770557  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:24.804523  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:24.804549  662109 cri.go:89] found id: ""
	I1209 11:56:24.804558  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:24.804607  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.808452  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:24.808528  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:24.846043  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:24.846072  662109 cri.go:89] found id: ""
	I1209 11:56:24.846084  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:24.846140  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.849991  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:24.850057  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:24.884853  662109 cri.go:89] found id: ""
	I1209 11:56:24.884889  662109 logs.go:282] 0 containers: []
	W1209 11:56:24.884902  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:24.884912  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:24.884983  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:24.920103  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:24.920131  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:24.920135  662109 cri.go:89] found id: ""
	I1209 11:56:24.920152  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:24.920223  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.924212  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.928416  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:24.928436  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:25.077407  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:25.077468  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:25.125600  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:25.125649  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:25.163222  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:25.163268  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:25.208430  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:25.208465  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:25.245884  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:25.245917  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:25.318723  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:25.318775  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:25.333173  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:25.333207  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:25.394636  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:25.394683  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:25.435210  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:25.435248  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:25.482142  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:25.482184  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:25.516975  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:25.517006  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:25.565526  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:25.565565  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:25.896281  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:28.395529  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:27.454449  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:29.950704  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:28.549071  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:28.567288  662109 api_server.go:72] duration metric: took 4m18.770451099s to wait for apiserver process to appear ...
	I1209 11:56:28.567319  662109 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:56:28.567367  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:28.567418  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:28.603341  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:28.603365  662109 cri.go:89] found id: ""
	I1209 11:56:28.603372  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:28.603423  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.607416  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:28.607493  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:28.647437  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:28.647465  662109 cri.go:89] found id: ""
	I1209 11:56:28.647477  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:28.647539  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.651523  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:28.651584  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:28.687889  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:28.687920  662109 cri.go:89] found id: ""
	I1209 11:56:28.687929  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:28.687983  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.692025  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:28.692100  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:28.728934  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:28.728961  662109 cri.go:89] found id: ""
	I1209 11:56:28.728969  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:28.729020  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.733217  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:28.733300  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:28.768700  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:28.768726  662109 cri.go:89] found id: ""
	I1209 11:56:28.768735  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:28.768790  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.772844  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:28.772921  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:28.812073  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:28.812104  662109 cri.go:89] found id: ""
	I1209 11:56:28.812116  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:28.812195  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.816542  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:28.816612  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:28.850959  662109 cri.go:89] found id: ""
	I1209 11:56:28.850997  662109 logs.go:282] 0 containers: []
	W1209 11:56:28.851010  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:28.851018  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:28.851075  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:28.894115  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:28.894142  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:28.894148  662109 cri.go:89] found id: ""
	I1209 11:56:28.894157  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:28.894228  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.899260  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.903033  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:28.903055  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:28.916411  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:28.916447  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:28.965873  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:28.965911  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:29.003553  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:29.003591  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:29.038945  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:29.038989  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:29.079595  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:29.079636  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:29.117632  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:29.117665  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:29.556193  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:29.556245  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:29.629530  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:29.629571  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:29.746102  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:29.746137  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:29.799342  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:29.799379  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:29.851197  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:29.851254  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:29.884688  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:29.884725  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:30.396025  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:32.396195  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:34.396605  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:31.951405  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:34.451838  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:32.425773  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:56:32.432276  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I1209 11:56:32.433602  662109 api_server.go:141] control plane version: v1.31.2
	I1209 11:56:32.433634  662109 api_server.go:131] duration metric: took 3.866306159s to wait for apiserver health ...
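	[Editor's note] The healthz probe logged just above polls https://192.168.39.169:8443/healthz until it answers 200. A self-contained sketch of that kind of poll is below; the address comes from the log, while the loop, the timeout values, and the decision to skip TLS verification are illustrative assumptions rather than minikube's actual client setup.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.39.169:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Println("healthz returned", resp.Status)
			} else {
				fmt.Println("healthz not reachable yet:", err)
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for apiserver health")
	}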
	I1209 11:56:32.433647  662109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:56:32.433680  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:32.433744  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:32.471560  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:32.471593  662109 cri.go:89] found id: ""
	I1209 11:56:32.471604  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:32.471684  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.475735  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:32.475809  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:32.509788  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:32.509821  662109 cri.go:89] found id: ""
	I1209 11:56:32.509833  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:32.509889  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.513849  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:32.513908  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:32.547022  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:32.547046  662109 cri.go:89] found id: ""
	I1209 11:56:32.547055  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:32.547113  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.551393  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:32.551476  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:32.586478  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:32.586516  662109 cri.go:89] found id: ""
	I1209 11:56:32.586536  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:32.586605  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.592876  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:32.592950  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:32.626775  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:32.626803  662109 cri.go:89] found id: ""
	I1209 11:56:32.626812  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:32.626869  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.630757  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:32.630825  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:32.663980  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:32.664013  662109 cri.go:89] found id: ""
	I1209 11:56:32.664026  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:32.664093  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.668368  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:32.668449  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:32.704638  662109 cri.go:89] found id: ""
	I1209 11:56:32.704675  662109 logs.go:282] 0 containers: []
	W1209 11:56:32.704688  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:32.704695  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:32.704752  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:32.743694  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:32.743729  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:32.743735  662109 cri.go:89] found id: ""
	I1209 11:56:32.743746  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:32.743814  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.749146  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.753226  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:32.753253  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:32.787832  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:32.787877  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:32.824859  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:32.824891  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:32.881776  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:32.881808  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:32.919018  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:32.919064  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:32.956839  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:32.956869  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:33.334255  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:33.334300  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:33.406008  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:33.406049  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:33.453689  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:33.453724  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:33.496168  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:33.496209  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:33.532057  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:33.532090  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:33.575050  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:33.575087  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:33.588543  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:33.588575  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:36.194483  662109 system_pods.go:59] 8 kube-system pods found
	I1209 11:56:36.194516  662109 system_pods.go:61] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running
	I1209 11:56:36.194522  662109 system_pods.go:61] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running
	I1209 11:56:36.194527  662109 system_pods.go:61] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running
	I1209 11:56:36.194531  662109 system_pods.go:61] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running
	I1209 11:56:36.194534  662109 system_pods.go:61] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:56:36.194538  662109 system_pods.go:61] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running
	I1209 11:56:36.194543  662109 system_pods.go:61] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:56:36.194549  662109 system_pods.go:61] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running
	I1209 11:56:36.194559  662109 system_pods.go:74] duration metric: took 3.76090495s to wait for pod list to return data ...
	I1209 11:56:36.194567  662109 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:56:36.197070  662109 default_sa.go:45] found service account: "default"
	I1209 11:56:36.197094  662109 default_sa.go:55] duration metric: took 2.520926ms for default service account to be created ...
	I1209 11:56:36.197104  662109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:56:36.201494  662109 system_pods.go:86] 8 kube-system pods found
	I1209 11:56:36.201518  662109 system_pods.go:89] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running
	I1209 11:56:36.201524  662109 system_pods.go:89] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running
	I1209 11:56:36.201528  662109 system_pods.go:89] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running
	I1209 11:56:36.201533  662109 system_pods.go:89] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running
	I1209 11:56:36.201537  662109 system_pods.go:89] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:56:36.201540  662109 system_pods.go:89] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running
	I1209 11:56:36.201547  662109 system_pods.go:89] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:56:36.201551  662109 system_pods.go:89] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running
	I1209 11:56:36.201558  662109 system_pods.go:126] duration metric: took 4.448871ms to wait for k8s-apps to be running ...
	I1209 11:56:36.201567  662109 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:56:36.201628  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:56:36.217457  662109 system_svc.go:56] duration metric: took 15.878252ms WaitForService to wait for kubelet
	I1209 11:56:36.217503  662109 kubeadm.go:582] duration metric: took 4m26.420670146s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:56:36.217527  662109 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:56:36.220498  662109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:56:36.220526  662109 node_conditions.go:123] node cpu capacity is 2
	I1209 11:56:36.220572  662109 node_conditions.go:105] duration metric: took 3.039367ms to run NodePressure ...
	I1209 11:56:36.220586  662109 start.go:241] waiting for startup goroutines ...
	I1209 11:56:36.220597  662109 start.go:246] waiting for cluster config update ...
	I1209 11:56:36.220628  662109 start.go:255] writing updated cluster config ...
	I1209 11:56:36.220974  662109 ssh_runner.go:195] Run: rm -f paused
	I1209 11:56:36.272920  662109 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:56:36.274686  662109 out.go:177] * Done! kubectl is now configured to use "no-preload-820741" cluster and "default" namespace by default
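	[Editor's note] After the "Done!" line above, the kubeconfig current context points at no-preload-820741, while the system_pods listing a few lines earlier still shows metrics-server-6867b74b74-pwcsr as Pending. A quick, purely illustrative follow-up check one could run against that context:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Context name taken from the "Done!" log line; listing kube-system pods
		// should show the pods the system_pods lines above reported, with
		// metrics-server still Pending.
		out, err := exec.Command("kubectl", "--context", "no-preload-820741",
			"-n", "kube-system", "get", "pods", "-o", "wide").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("kubectl failed:", err)
		}
	}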
	I1209 11:56:36.895681  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:38.896066  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:36.951281  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:39.455225  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:41.395880  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:43.895464  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:41.951287  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:44.451357  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:45.896184  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:48.398617  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:46.451733  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:48.950857  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:50.950964  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:50.895678  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:52.896291  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:53.389365  663024 pod_ready.go:82] duration metric: took 4m0.00015362s for pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace to be "Ready" ...
	E1209 11:56:53.389414  663024 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1209 11:56:53.389440  663024 pod_ready.go:39] duration metric: took 4m13.044002506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:56:53.389480  663024 kubeadm.go:597] duration metric: took 4m21.286289463s to restartPrimaryControlPlane
	W1209 11:56:53.389572  663024 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:56:53.389610  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:56:52.951153  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:55.451223  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:57.950413  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:00.449904  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:02.450069  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:04.451074  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:06.950873  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:08.951176  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:11.450596  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:13.451552  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:13.944884  661546 pod_ready.go:82] duration metric: took 4m0.000348644s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" ...
	E1209 11:57:13.944919  661546 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I1209 11:57:13.944943  661546 pod_ready.go:39] duration metric: took 4m14.049505666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:13.944980  661546 kubeadm.go:597] duration metric: took 4m22.094543781s to restartPrimaryControlPlane
	W1209 11:57:13.945086  661546 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:57:13.945123  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:57:19.569119  663024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.179481312s)
	I1209 11:57:19.569196  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:19.583584  663024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:57:19.592807  663024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:57:19.602121  663024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:57:19.602190  663024 kubeadm.go:157] found existing configuration files:
	
	I1209 11:57:19.602249  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 11:57:19.611109  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:57:19.611187  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:57:19.620264  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 11:57:19.629026  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:57:19.629103  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:57:19.638036  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 11:57:19.646265  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:57:19.646331  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:57:19.655187  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 11:57:19.663908  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:57:19.663962  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:57:19.673002  663024 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:57:19.717664  663024 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 11:57:19.717737  663024 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:57:19.818945  663024 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:57:19.819065  663024 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:57:19.819160  663024 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 11:57:19.828186  663024 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:57:19.829831  663024 out.go:235]   - Generating certificates and keys ...
	I1209 11:57:19.829938  663024 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:57:19.830031  663024 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:57:19.830145  663024 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:57:19.830252  663024 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:57:19.830377  663024 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:57:19.830470  663024 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:57:19.830568  663024 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:57:19.830644  663024 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:57:19.830745  663024 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:57:19.830825  663024 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:57:19.830878  663024 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:57:19.830963  663024 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:57:19.961813  663024 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:57:20.436964  663024 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 11:57:20.652041  663024 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:57:20.837664  663024 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:57:20.892035  663024 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:57:20.892497  663024 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:57:20.895295  663024 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:57:20.896871  663024 out.go:235]   - Booting up control plane ...
	I1209 11:57:20.896992  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:57:20.897139  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:57:20.897260  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:57:20.914735  663024 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:57:20.920520  663024 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:57:20.920566  663024 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:57:21.047290  663024 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 11:57:21.047437  663024 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 11:57:22.049131  663024 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001914766s
	I1209 11:57:22.049257  663024 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 11:57:27.053443  663024 kubeadm.go:310] [api-check] The API server is healthy after 5.002570817s
	I1209 11:57:27.068518  663024 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 11:57:27.086371  663024 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 11:57:27.114617  663024 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 11:57:27.114833  663024 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-482476 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 11:57:27.131354  663024 kubeadm.go:310] [bootstrap-token] Using token: 6aanjy.0y855mmcca5ic9co
	I1209 11:57:27.132852  663024 out.go:235]   - Configuring RBAC rules ...
	I1209 11:57:27.132992  663024 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 11:57:27.139770  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 11:57:27.147974  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 11:57:27.155508  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 11:57:27.159181  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 11:57:27.163403  663024 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 11:57:27.458812  663024 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 11:57:27.900322  663024 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 11:57:28.458864  663024 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 11:57:28.459944  663024 kubeadm.go:310] 
	I1209 11:57:28.460043  663024 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 11:57:28.460054  663024 kubeadm.go:310] 
	I1209 11:57:28.460156  663024 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 11:57:28.460166  663024 kubeadm.go:310] 
	I1209 11:57:28.460198  663024 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 11:57:28.460284  663024 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 11:57:28.460385  663024 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 11:57:28.460414  663024 kubeadm.go:310] 
	I1209 11:57:28.460499  663024 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 11:57:28.460509  663024 kubeadm.go:310] 
	I1209 11:57:28.460576  663024 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 11:57:28.460586  663024 kubeadm.go:310] 
	I1209 11:57:28.460663  663024 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 11:57:28.460766  663024 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 11:57:28.460862  663024 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 11:57:28.460871  663024 kubeadm.go:310] 
	I1209 11:57:28.460992  663024 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 11:57:28.461096  663024 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 11:57:28.461121  663024 kubeadm.go:310] 
	I1209 11:57:28.461244  663024 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 6aanjy.0y855mmcca5ic9co \
	I1209 11:57:28.461395  663024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 11:57:28.461435  663024 kubeadm.go:310] 	--control-plane 
	I1209 11:57:28.461446  663024 kubeadm.go:310] 
	I1209 11:57:28.461551  663024 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 11:57:28.461574  663024 kubeadm.go:310] 
	I1209 11:57:28.461679  663024 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 6aanjy.0y855mmcca5ic9co \
	I1209 11:57:28.461832  663024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 11:57:28.462544  663024 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:57:28.462594  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:57:28.462620  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:57:28.464574  663024 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:57:28.465952  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:57:28.476155  663024 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:57:28.493471  663024 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:57:28.493551  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:28.493594  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-482476 minikube.k8s.io/updated_at=2024_12_09T11_57_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=default-k8s-diff-port-482476 minikube.k8s.io/primary=true
	I1209 11:57:28.506467  663024 ops.go:34] apiserver oom_adj: -16
	I1209 11:57:28.724224  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:29.224971  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:29.724660  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:30.224466  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:30.724354  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:31.224702  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:31.725101  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.224364  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.724357  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.844191  663024 kubeadm.go:1113] duration metric: took 4.350713188s to wait for elevateKubeSystemPrivileges
	I1209 11:57:32.844243  663024 kubeadm.go:394] duration metric: took 5m0.79272843s to StartCluster
	I1209 11:57:32.844287  663024 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:32.844417  663024 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:57:32.846697  663024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:32.847014  663024 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:57:32.847067  663024 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:57:32.847162  663024 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847186  663024 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847192  663024 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.847201  663024 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:57:32.847204  663024 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-482476"
	I1209 11:57:32.847228  663024 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847272  663024 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.847287  663024 addons.go:243] addon metrics-server should already be in state true
	I1209 11:57:32.847285  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:57:32.847328  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.847237  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.847705  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847713  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847750  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.847755  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.847841  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847873  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.848599  663024 out.go:177] * Verifying Kubernetes components...
	I1209 11:57:32.850246  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:57:32.864945  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44639
	I1209 11:57:32.865141  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37211
	I1209 11:57:32.865203  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42835
	I1209 11:57:32.865473  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.865635  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.865733  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.866096  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866115  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866241  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866264  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866241  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866316  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866642  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866654  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866656  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866865  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.867243  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.867287  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.867321  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.867358  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.871085  663024 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.871109  663024 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:57:32.871142  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.871395  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.871431  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.883301  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45049
	I1209 11:57:32.883976  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.884508  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36509
	I1209 11:57:32.884758  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.884775  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.885123  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.885279  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.885610  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.885801  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.885817  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.886142  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.886347  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.888357  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.888762  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
	I1209 11:57:32.889103  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.889192  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.889669  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.889692  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.890035  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.890082  663024 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:57:32.890647  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.890687  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.890867  663024 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:57:32.891756  663024 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:32.891774  663024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:57:32.891794  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.892543  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:57:32.892563  663024 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:57:32.892587  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.896754  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897437  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.897471  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897752  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.897836  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897975  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.898370  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.898381  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.898395  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.898556  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:32.898649  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.898829  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.898975  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.899101  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:32.907891  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34063
	I1209 11:57:32.908317  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.908827  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.908848  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.909352  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.909551  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.911172  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.911417  663024 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:32.911434  663024 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:57:32.911460  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.914016  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.914474  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.914490  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.914646  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.914838  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.914965  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.915071  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:33.067075  663024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:57:33.085671  663024 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-482476" to be "Ready" ...
	I1209 11:57:33.095765  663024 node_ready.go:49] node "default-k8s-diff-port-482476" has status "Ready":"True"
	I1209 11:57:33.095801  663024 node_ready.go:38] duration metric: took 10.096442ms for node "default-k8s-diff-port-482476" to be "Ready" ...
	I1209 11:57:33.095815  663024 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:33.105497  663024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:33.200059  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:33.218467  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:57:33.218496  663024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:57:33.225990  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:33.278736  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:57:33.278772  663024 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:57:33.342270  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:33.342304  663024 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:57:33.412771  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:34.250639  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.050535014s)
	I1209 11:57:34.250706  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.250720  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.250704  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.024681453s)
	I1209 11:57:34.250811  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.250820  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.251151  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.251170  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.251182  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.251192  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.251197  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.251238  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.251245  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.251253  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.251261  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.253136  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.253141  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.253180  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.253182  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.253194  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.253214  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.279650  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.279682  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.280064  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.280116  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.280130  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.656217  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.243394493s)
	I1209 11:57:34.656287  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.656305  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.656641  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.656655  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.656671  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.656683  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.656691  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.656982  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.656999  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.657011  663024 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-482476"
	I1209 11:57:34.658878  663024 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1209 11:57:34.660089  663024 addons.go:510] duration metric: took 1.813029421s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1209 11:57:35.122487  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:36.112072  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.112097  663024 pod_ready.go:82] duration metric: took 3.006564547s for pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.112110  663024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.117521  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.117545  663024 pod_ready.go:82] duration metric: took 5.428168ms for pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.117554  663024 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.122929  663024 pod_ready.go:93] pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.122953  663024 pod_ready.go:82] duration metric: took 5.392834ms for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.122972  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.127025  663024 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.127047  663024 pod_ready.go:82] duration metric: took 4.068175ms for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.127056  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.131036  663024 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.131055  663024 pod_ready.go:82] duration metric: took 3.993825ms for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.131064  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pgs52" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.508951  663024 pod_ready.go:93] pod "kube-proxy-pgs52" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.508980  663024 pod_ready.go:82] duration metric: took 377.910722ms for pod "kube-proxy-pgs52" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.508991  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.909065  663024 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.909093  663024 pod_ready.go:82] duration metric: took 400.095775ms for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.909100  663024 pod_ready.go:39] duration metric: took 3.813270613s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:36.909116  663024 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:57:36.909169  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:57:36.924688  663024 api_server.go:72] duration metric: took 4.077626254s to wait for apiserver process to appear ...
	I1209 11:57:36.924726  663024 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:57:36.924752  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:57:36.930782  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 200:
	ok
	I1209 11:57:36.931734  663024 api_server.go:141] control plane version: v1.31.2
	I1209 11:57:36.931758  663024 api_server.go:131] duration metric: took 7.024599ms to wait for apiserver health ...
	I1209 11:57:36.931766  663024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:57:37.112291  663024 system_pods.go:59] 9 kube-system pods found
	I1209 11:57:37.112323  663024 system_pods.go:61] "coredns-7c65d6cfc9-7rr27" [a5dd0401-80bf-4c87-9771-e1837c960425] Running
	I1209 11:57:37.112328  663024 system_pods.go:61] "coredns-7c65d6cfc9-bb47s" [2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4] Running
	I1209 11:57:37.112332  663024 system_pods.go:61] "etcd-default-k8s-diff-port-482476" [01dfcc5e-2ac5-4cee-8b68-ac1f3bf8866b] Running
	I1209 11:57:37.112337  663024 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-482476" [c65f1162-8d8a-47e8-8f1a-4abc0ebc8649] Running
	I1209 11:57:37.112340  663024 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-482476" [363e5f34-878a-434e-8bf2-21cdc06be305] Running
	I1209 11:57:37.112343  663024 system_pods.go:61] "kube-proxy-pgs52" [d5a3463e-e955-4345-9559-b23cce44fa0e] Running
	I1209 11:57:37.112346  663024 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-482476" [53f924fa-554b-4ceb-b171-efcfecdc137e] Running
	I1209 11:57:37.112356  663024 system_pods.go:61] "metrics-server-6867b74b74-2lmtn" [60803d31-d0b0-4d51-a9f2-cadafd184a90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:57:37.112363  663024 system_pods.go:61] "storage-provisioner" [6b53e3ba-9bc9-4b5a-bec9-d06336616c8a] Running
	I1209 11:57:37.112373  663024 system_pods.go:74] duration metric: took 180.599339ms to wait for pod list to return data ...
	I1209 11:57:37.112387  663024 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:57:37.309750  663024 default_sa.go:45] found service account: "default"
	I1209 11:57:37.309777  663024 default_sa.go:55] duration metric: took 197.382304ms for default service account to be created ...
	I1209 11:57:37.309787  663024 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:57:37.513080  663024 system_pods.go:86] 9 kube-system pods found
	I1209 11:57:37.513112  663024 system_pods.go:89] "coredns-7c65d6cfc9-7rr27" [a5dd0401-80bf-4c87-9771-e1837c960425] Running
	I1209 11:57:37.513118  663024 system_pods.go:89] "coredns-7c65d6cfc9-bb47s" [2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4] Running
	I1209 11:57:37.513121  663024 system_pods.go:89] "etcd-default-k8s-diff-port-482476" [01dfcc5e-2ac5-4cee-8b68-ac1f3bf8866b] Running
	I1209 11:57:37.513128  663024 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-482476" [c65f1162-8d8a-47e8-8f1a-4abc0ebc8649] Running
	I1209 11:57:37.513133  663024 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-482476" [363e5f34-878a-434e-8bf2-21cdc06be305] Running
	I1209 11:57:37.513136  663024 system_pods.go:89] "kube-proxy-pgs52" [d5a3463e-e955-4345-9559-b23cce44fa0e] Running
	I1209 11:57:37.513141  663024 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-482476" [53f924fa-554b-4ceb-b171-efcfecdc137e] Running
	I1209 11:57:37.513150  663024 system_pods.go:89] "metrics-server-6867b74b74-2lmtn" [60803d31-d0b0-4d51-a9f2-cadafd184a90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:57:37.513156  663024 system_pods.go:89] "storage-provisioner" [6b53e3ba-9bc9-4b5a-bec9-d06336616c8a] Running
	I1209 11:57:37.513168  663024 system_pods.go:126] duration metric: took 203.373238ms to wait for k8s-apps to be running ...
	I1209 11:57:37.513181  663024 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:57:37.513233  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:37.527419  663024 system_svc.go:56] duration metric: took 14.22618ms WaitForService to wait for kubelet
	I1209 11:57:37.527451  663024 kubeadm.go:582] duration metric: took 4.680397826s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:57:37.527473  663024 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:57:37.710396  663024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:57:37.710429  663024 node_conditions.go:123] node cpu capacity is 2
	I1209 11:57:37.710447  663024 node_conditions.go:105] duration metric: took 182.968526ms to run NodePressure ...
	I1209 11:57:37.710463  663024 start.go:241] waiting for startup goroutines ...
	I1209 11:57:37.710473  663024 start.go:246] waiting for cluster config update ...
	I1209 11:57:37.710487  663024 start.go:255] writing updated cluster config ...
	I1209 11:57:37.710799  663024 ssh_runner.go:195] Run: rm -f paused
	I1209 11:57:37.760468  663024 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:57:37.762472  663024 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-482476" cluster and "default" namespace by default
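At this point the log shows the "default-k8s-diff-port-482476" cluster up with the storage-provisioner, default-storageclass, and metrics-server addons enabled. As a hedged sketch only (these commands are not part of the test run; the profile name, IP, and port are taken from the log above), the same state could be inspected by hand with kubectl:

	# point kubectl at the profile the log just finished configuring
	kubectl config use-context default-k8s-diff-port-482476
	# the system-critical pods the log waits on (coredns, etcd, apiserver, controller-manager, kube-proxy, scheduler)
	kubectl get pods -n kube-system
	# metrics-server was applied via metrics-apiservice.yaml; its APIService should eventually report Available
	kubectl get apiservice v1beta1.metrics.k8s.io
	# the same healthz endpoint the log polls on port 8444
	curl -k https://192.168.50.25:8444/healthz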
	I1209 11:57:40.219406  661546 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.274255602s)
	I1209 11:57:40.219478  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:40.234863  661546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:57:40.245357  661546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:57:40.255253  661546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:57:40.255276  661546 kubeadm.go:157] found existing configuration files:
	
	I1209 11:57:40.255319  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:57:40.264881  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:57:40.264934  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:57:40.274990  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:57:40.284941  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:57:40.284998  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:57:40.295188  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:57:40.305136  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:57:40.305181  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:57:40.315125  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:57:40.324727  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:57:40.324789  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:57:40.333574  661546 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:57:40.378743  661546 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 11:57:40.378932  661546 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:57:40.492367  661546 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:57:40.492493  661546 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:57:40.492658  661546 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 11:57:40.504994  661546 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:57:40.506760  661546 out.go:235]   - Generating certificates and keys ...
	I1209 11:57:40.506878  661546 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:57:40.506955  661546 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:57:40.507033  661546 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:57:40.507088  661546 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:57:40.507156  661546 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:57:40.507274  661546 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:57:40.507377  661546 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:57:40.507463  661546 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:57:40.507573  661546 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:57:40.507692  661546 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:57:40.507756  661546 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:57:40.507836  661546 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:57:40.607744  661546 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:57:40.684950  661546 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 11:57:40.826079  661546 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:57:40.945768  661546 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:57:41.212984  661546 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:57:41.213406  661546 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:57:41.216390  661546 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:57:41.218053  661546 out.go:235]   - Booting up control plane ...
	I1209 11:57:41.218202  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:57:41.218307  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:57:41.220009  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:57:41.237816  661546 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:57:41.244148  661546 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:57:41.244204  661546 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:57:41.371083  661546 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 11:57:41.371245  661546 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 11:57:41.872938  661546 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.998998ms
	I1209 11:57:41.873141  661546 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 11:57:46.874725  661546 kubeadm.go:310] [api-check] The API server is healthy after 5.001587898s
	I1209 11:57:46.886996  661546 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 11:57:46.897941  661546 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 11:57:46.927451  661546 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 11:57:46.927718  661546 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-005123 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 11:57:46.945578  661546 kubeadm.go:310] [bootstrap-token] Using token: bhdcn7.orsewwwtbk1gmdg8
	I1209 11:57:46.946894  661546 out.go:235]   - Configuring RBAC rules ...
	I1209 11:57:46.947041  661546 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 11:57:46.950006  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 11:57:46.956761  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 11:57:46.959756  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 11:57:46.962973  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 11:57:46.970016  661546 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 11:57:47.282251  661546 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 11:57:47.714588  661546 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 11:57:48.283610  661546 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 11:57:48.283671  661546 kubeadm.go:310] 
	I1209 11:57:48.283774  661546 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 11:57:48.283786  661546 kubeadm.go:310] 
	I1209 11:57:48.283901  661546 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 11:57:48.283948  661546 kubeadm.go:310] 
	I1209 11:57:48.283995  661546 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 11:57:48.284089  661546 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 11:57:48.284139  661546 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 11:57:48.284148  661546 kubeadm.go:310] 
	I1209 11:57:48.284216  661546 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 11:57:48.284224  661546 kubeadm.go:310] 
	I1209 11:57:48.284281  661546 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 11:57:48.284291  661546 kubeadm.go:310] 
	I1209 11:57:48.284359  661546 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 11:57:48.284465  661546 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 11:57:48.284583  661546 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 11:57:48.284596  661546 kubeadm.go:310] 
	I1209 11:57:48.284739  661546 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 11:57:48.284846  661546 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 11:57:48.284859  661546 kubeadm.go:310] 
	I1209 11:57:48.284972  661546 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bhdcn7.orsewwwtbk1gmdg8 \
	I1209 11:57:48.285133  661546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 11:57:48.285170  661546 kubeadm.go:310] 	--control-plane 
	I1209 11:57:48.285184  661546 kubeadm.go:310] 
	I1209 11:57:48.285312  661546 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 11:57:48.285321  661546 kubeadm.go:310] 
	I1209 11:57:48.285388  661546 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bhdcn7.orsewwwtbk1gmdg8 \
	I1209 11:57:48.285530  661546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 11:57:48.286117  661546 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:57:48.286246  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:57:48.286263  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:57:48.288141  661546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:57:48.289484  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:57:48.301160  661546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:57:48.320752  661546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:57:48.320831  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:48.320831  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-005123 minikube.k8s.io/updated_at=2024_12_09T11_57_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=embed-certs-005123 minikube.k8s.io/primary=true
	I1209 11:57:48.552069  661546 ops.go:34] apiserver oom_adj: -16
	I1209 11:57:48.552119  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:49.052304  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:49.552516  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:50.052548  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:50.552931  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:51.052381  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:51.552589  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.052273  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.552546  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.645059  661546 kubeadm.go:1113] duration metric: took 4.324296774s to wait for elevateKubeSystemPrivileges
	I1209 11:57:52.645107  661546 kubeadm.go:394] duration metric: took 5m0.847017281s to StartCluster
	I1209 11:57:52.645133  661546 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:52.645241  661546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:57:52.647822  661546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:52.648129  661546 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:57:52.648226  661546 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:57:52.648338  661546 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-005123"
	I1209 11:57:52.648354  661546 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-005123"
	W1209 11:57:52.648366  661546 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:57:52.648367  661546 addons.go:69] Setting default-storageclass=true in profile "embed-certs-005123"
	I1209 11:57:52.648396  661546 config.go:182] Loaded profile config "embed-certs-005123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:57:52.648397  661546 addons.go:69] Setting metrics-server=true in profile "embed-certs-005123"
	I1209 11:57:52.648434  661546 addons.go:234] Setting addon metrics-server=true in "embed-certs-005123"
	I1209 11:57:52.648399  661546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-005123"
	W1209 11:57:52.648448  661546 addons.go:243] addon metrics-server should already be in state true
	I1209 11:57:52.648499  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.648400  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.648867  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648883  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648914  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648932  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.648947  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.648917  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.649702  661546 out.go:177] * Verifying Kubernetes components...
	I1209 11:57:52.651094  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:57:52.665090  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38065
	I1209 11:57:52.665309  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35905
	I1209 11:57:52.665602  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.665889  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.666308  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.666329  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.666470  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.666492  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.666768  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.666907  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.667140  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I1209 11:57:52.667344  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.667387  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.667536  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.667580  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.667652  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.668127  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.668154  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.668657  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.668868  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.672550  661546 addons.go:234] Setting addon default-storageclass=true in "embed-certs-005123"
	W1209 11:57:52.672580  661546 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:57:52.672612  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.672985  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.673032  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.684848  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43667
	I1209 11:57:52.684854  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I1209 11:57:52.685398  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.685451  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.686054  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.686081  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.686155  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.686228  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.686553  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.686614  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.686753  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.686930  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.687838  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33245
	I1209 11:57:52.688391  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.688818  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.689013  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.689040  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.689314  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.689450  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.689908  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.689943  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.691136  661546 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:57:52.691137  661546 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:57:52.692714  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:57:52.692732  661546 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:57:52.692749  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.692789  661546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:52.692800  661546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:57:52.692813  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.696349  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.696791  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.696815  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697088  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697143  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.697314  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.697482  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.697512  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.697547  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697658  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.697787  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.697962  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.698093  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.698209  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.705766  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37265
	I1209 11:57:52.706265  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.706694  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.706721  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.707031  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.707241  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.708747  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.708980  661546 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:52.708997  661546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:57:52.709016  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.711546  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.711986  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.712011  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.712263  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.712438  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.712604  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.712751  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.858535  661546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:57:52.879035  661546 node_ready.go:35] waiting up to 6m0s for node "embed-certs-005123" to be "Ready" ...
	I1209 11:57:52.899550  661546 node_ready.go:49] node "embed-certs-005123" has status "Ready":"True"
	I1209 11:57:52.899575  661546 node_ready.go:38] duration metric: took 20.508179ms for node "embed-certs-005123" to be "Ready" ...
	I1209 11:57:52.899589  661546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:52.960716  661546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:52.962755  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:57:52.962779  661546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:57:52.995747  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:57:52.995787  661546 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:57:53.031395  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:53.031426  661546 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:57:53.031535  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:53.049695  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:53.061716  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:53.314158  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.314212  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.314523  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:53.314548  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.314565  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:53.314586  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.314598  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.314857  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.314875  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:53.323573  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.323590  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.323822  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:53.323873  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.323882  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.004616  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.004655  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.005050  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.005067  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.005075  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.005083  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.005351  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.005372  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.005370  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:54.352527  661546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.290758533s)
	I1209 11:57:54.352616  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.352636  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.352957  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.352977  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.352987  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.352995  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.353278  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.353320  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.353336  661546 addons.go:475] Verifying addon metrics-server=true in "embed-certs-005123"
	I1209 11:57:54.353387  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:54.355153  661546 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1209 11:57:54.356250  661546 addons.go:510] duration metric: took 1.708044398s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1209 11:57:54.968202  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:57.467948  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:57.467979  661546 pod_ready.go:82] duration metric: took 4.507228843s for pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:57.467992  661546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:59.475024  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace has status "Ready":"False"
	I1209 11:58:00.473961  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.473987  661546 pod_ready.go:82] duration metric: took 3.005987981s for pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.473996  661546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.478022  661546 pod_ready.go:93] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.478040  661546 pod_ready.go:82] duration metric: took 4.038353ms for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.478049  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.482415  661546 pod_ready.go:93] pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.482439  661546 pod_ready.go:82] duration metric: took 4.384854ms for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.482449  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.486284  661546 pod_ready.go:93] pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.486311  661546 pod_ready.go:82] duration metric: took 3.85467ms for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.486326  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n4pph" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.490260  661546 pod_ready.go:93] pod "kube-proxy-n4pph" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.490284  661546 pod_ready.go:82] duration metric: took 3.949342ms for pod "kube-proxy-n4pph" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.490296  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.872396  661546 pod_ready.go:93] pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.872420  661546 pod_ready.go:82] duration metric: took 382.116873ms for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.872428  661546 pod_ready.go:39] duration metric: took 7.97282742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
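The readiness polling above (pod_ready.go) happens through the Kubernetes API from inside minikube; a roughly equivalent manual check with kubectl, using the context name and pod labels that appear in this log, would look like the sketch below. This is illustrative only; the 120s timeout is an arbitrary choice, not a value taken from the test.

    kubectl --context embed-certs-005123 -n kube-system get pods
    kubectl --context embed-certs-005123 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
    kubectl --context embed-certs-005123 -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=120s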
	I1209 11:58:00.872446  661546 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:58:00.872502  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:58:00.887281  661546 api_server.go:72] duration metric: took 8.239108757s to wait for apiserver process to appear ...
	I1209 11:58:00.887312  661546 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:58:00.887333  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:58:00.892005  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 200:
	ok
	I1209 11:58:00.893247  661546 api_server.go:141] control plane version: v1.31.2
	I1209 11:58:00.893277  661546 api_server.go:131] duration metric: took 5.95753ms to wait for apiserver health ...
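The healthz probe above can be reproduced by hand; a minimal sketch, assuming the same endpoint and context shown in the log (a raw curl may need client credentials depending on the apiserver's anonymous-auth setting):

    kubectl --context embed-certs-005123 get --raw /healthz    # should print "ok"
    curl -k https://192.168.72.218:8443/healthz                # may return 401/403 without credentials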
	I1209 11:58:00.893288  661546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:58:01.074723  661546 system_pods.go:59] 9 kube-system pods found
	I1209 11:58:01.074756  661546 system_pods.go:61] "coredns-7c65d6cfc9-t49mk" [ca3ba094-58a2-401d-8aea-46d6d96baacb] Running
	I1209 11:58:01.074762  661546 system_pods.go:61] "coredns-7c65d6cfc9-xspr9" [9384e9ea-987e-4728-bdf2-773645d52ab1] Running
	I1209 11:58:01.074766  661546 system_pods.go:61] "etcd-embed-certs-005123" [f8b23ab4-b852-4598-93d7-3b6eb1543a4b] Running
	I1209 11:58:01.074771  661546 system_pods.go:61] "kube-apiserver-embed-certs-005123" [96b0cb5a-1d81-48fc-bbc9-015de7c48ac5] Running
	I1209 11:58:01.074774  661546 system_pods.go:61] "kube-controller-manager-embed-certs-005123" [33c237d8-8f5a-4d8f-870a-7ba0dc2180dd] Running
	I1209 11:58:01.074777  661546 system_pods.go:61] "kube-proxy-n4pph" [520d101f-0df0-413f-a0fc-22ecc2884d40] Running
	I1209 11:58:01.074780  661546 system_pods.go:61] "kube-scheduler-embed-certs-005123" [11dcaf15-fc3b-4945-af9b-d08aa5528679] Running
	I1209 11:58:01.074786  661546 system_pods.go:61] "metrics-server-6867b74b74-zfw9r" [8438b820-4cc5-4d7b-8af5-9349fdd87ca8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:58:01.074791  661546 system_pods.go:61] "storage-provisioner" [91ceb801-7262-4d7e-9623-c8c1931fc34b] Running
	I1209 11:58:01.074797  661546 system_pods.go:74] duration metric: took 181.502993ms to wait for pod list to return data ...
	I1209 11:58:01.074804  661546 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:58:01.272664  661546 default_sa.go:45] found service account: "default"
	I1209 11:58:01.272697  661546 default_sa.go:55] duration metric: took 197.886347ms for default service account to be created ...
	I1209 11:58:01.272707  661546 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:58:01.475062  661546 system_pods.go:86] 9 kube-system pods found
	I1209 11:58:01.475096  661546 system_pods.go:89] "coredns-7c65d6cfc9-t49mk" [ca3ba094-58a2-401d-8aea-46d6d96baacb] Running
	I1209 11:58:01.475102  661546 system_pods.go:89] "coredns-7c65d6cfc9-xspr9" [9384e9ea-987e-4728-bdf2-773645d52ab1] Running
	I1209 11:58:01.475105  661546 system_pods.go:89] "etcd-embed-certs-005123" [f8b23ab4-b852-4598-93d7-3b6eb1543a4b] Running
	I1209 11:58:01.475109  661546 system_pods.go:89] "kube-apiserver-embed-certs-005123" [96b0cb5a-1d81-48fc-bbc9-015de7c48ac5] Running
	I1209 11:58:01.475114  661546 system_pods.go:89] "kube-controller-manager-embed-certs-005123" [33c237d8-8f5a-4d8f-870a-7ba0dc2180dd] Running
	I1209 11:58:01.475118  661546 system_pods.go:89] "kube-proxy-n4pph" [520d101f-0df0-413f-a0fc-22ecc2884d40] Running
	I1209 11:58:01.475121  661546 system_pods.go:89] "kube-scheduler-embed-certs-005123" [11dcaf15-fc3b-4945-af9b-d08aa5528679] Running
	I1209 11:58:01.475131  661546 system_pods.go:89] "metrics-server-6867b74b74-zfw9r" [8438b820-4cc5-4d7b-8af5-9349fdd87ca8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:58:01.475138  661546 system_pods.go:89] "storage-provisioner" [91ceb801-7262-4d7e-9623-c8c1931fc34b] Running
	I1209 11:58:01.475148  661546 system_pods.go:126] duration metric: took 202.434687ms to wait for k8s-apps to be running ...
	I1209 11:58:01.475158  661546 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:58:01.475220  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:58:01.490373  661546 system_svc.go:56] duration metric: took 15.20079ms WaitForService to wait for kubelet
	I1209 11:58:01.490416  661546 kubeadm.go:582] duration metric: took 8.842250416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:58:01.490451  661546 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:58:01.673621  661546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:58:01.673651  661546 node_conditions.go:123] node cpu capacity is 2
	I1209 11:58:01.673662  661546 node_conditions.go:105] duration metric: took 183.205852ms to run NodePressure ...
	I1209 11:58:01.673674  661546 start.go:241] waiting for startup goroutines ...
	I1209 11:58:01.673681  661546 start.go:246] waiting for cluster config update ...
	I1209 11:58:01.673691  661546 start.go:255] writing updated cluster config ...
	I1209 11:58:01.673995  661546 ssh_runner.go:195] Run: rm -f paused
	I1209 11:58:01.725363  661546 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:58:01.727275  661546 out.go:177] * Done! kubectl is now configured to use "embed-certs-005123" cluster and "default" namespace by default
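With the embed-certs-005123 start finished, a quick manual verification of the cluster and of the addons enabled above might look like the following. This is only a sketch; the APIService name is the one conventionally registered by metrics-server and could differ if the bundled manifests change.

    kubectl --context embed-certs-005123 get nodes
    kubectl --context embed-certs-005123 -n kube-system get pods
    kubectl --context embed-certs-005123 get apiservices v1beta1.metrics.k8s.io   # metrics-server registration
    minikube -p embed-certs-005123 addons list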
	I1209 11:58:14.994765  662586 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 11:58:14.994918  662586 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 11:58:14.995050  662586 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:58:14.995118  662586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:58:14.995182  662586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:58:14.995272  662586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:58:14.995353  662586 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:58:14.995410  662586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:58:14.996905  662586 out.go:235]   - Generating certificates and keys ...
	I1209 11:58:14.997000  662586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:58:14.997055  662586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:58:14.997123  662586 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:58:14.997184  662586 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:58:14.997278  662586 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:58:14.997349  662586 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:58:14.997474  662586 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:58:14.997567  662586 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:58:14.997631  662586 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:58:14.997700  662586 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:58:14.997736  662586 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:58:14.997783  662586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:58:14.997826  662586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:58:14.997871  662586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:58:14.997930  662586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:58:14.997977  662586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:58:14.998063  662586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:58:14.998141  662586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:58:14.998199  662586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:58:14.998264  662586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:58:14.999539  662586 out.go:235]   - Booting up control plane ...
	I1209 11:58:14.999663  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:58:14.999748  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:58:14.999824  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:58:14.999946  662586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:58:15.000148  662586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:58:15.000221  662586 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:58:15.000326  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000532  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.000598  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000753  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.000814  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000971  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001064  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.001273  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001335  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.001486  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001493  662586 kubeadm.go:310] 
	I1209 11:58:15.001553  662586 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 11:58:15.001616  662586 kubeadm.go:310] 		timed out waiting for the condition
	I1209 11:58:15.001631  662586 kubeadm.go:310] 
	I1209 11:58:15.001685  662586 kubeadm.go:310] 	This error is likely caused by:
	I1209 11:58:15.001732  662586 kubeadm.go:310] 		- The kubelet is not running
	I1209 11:58:15.001883  662586 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 11:58:15.001897  662586 kubeadm.go:310] 
	I1209 11:58:15.002041  662586 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 11:58:15.002087  662586 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 11:58:15.002146  662586 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 11:58:15.002156  662586 kubeadm.go:310] 
	I1209 11:58:15.002294  662586 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 11:58:15.002373  662586 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 11:58:15.002380  662586 kubeadm.go:310] 
	I1209 11:58:15.002502  662586 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 11:58:15.002623  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 11:58:15.002725  662586 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 11:58:15.002799  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 11:58:15.002835  662586 kubeadm.go:310] 
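The kubeadm failure message above already names the diagnostics to run on the node; collected in one place (the node can be reached with 'minikube ssh -p <profile>', where the profile name is not shown in this excerpt and is left as a placeholder), they are:

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID    # CONTAINERID from the ps output above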
	W1209 11:58:15.002956  662586 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1209 11:58:15.003022  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:58:15.469838  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:58:15.484503  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:58:15.493409  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:58:15.493430  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:58:15.493487  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:58:15.502508  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:58:15.502568  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:58:15.511743  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:58:15.519855  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:58:15.519913  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:58:15.528743  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:58:15.537000  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:58:15.537072  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:58:15.546520  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:58:15.555448  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:58:15.555526  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
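The config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane URL and removes any file that does not contain it (here the files are simply missing, so every grep exits non-zero and every file is removed). A compact shell restatement of that loop, purely illustrative since minikube itself does this in Go:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done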
	I1209 11:58:15.565618  662586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:58:15.631763  662586 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:58:15.631832  662586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:58:15.798683  662586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:58:15.798822  662586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:58:15.798957  662586 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:58:15.974522  662586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:58:15.976286  662586 out.go:235]   - Generating certificates and keys ...
	I1209 11:58:15.976408  662586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:58:15.976492  662586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:58:15.976616  662586 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:58:15.976714  662586 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:58:15.976813  662586 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:58:15.976889  662586 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:58:15.976978  662586 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:58:15.977064  662586 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:58:15.977184  662586 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:58:15.977251  662586 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:58:15.977287  662586 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:58:15.977363  662586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:58:16.193383  662586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:58:16.324912  662586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:58:16.541372  662586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:58:16.786389  662586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:58:16.807241  662586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:58:16.808750  662586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:58:16.808823  662586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:58:16.951756  662586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:58:16.954338  662586 out.go:235]   - Booting up control plane ...
	I1209 11:58:16.954486  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:58:16.968892  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:58:16.970556  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:58:16.971301  662586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:58:16.974040  662586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:58:56.976537  662586 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:58:56.976966  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:56.977214  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:01.977861  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:01.978074  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:11.978821  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:11.979056  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:31.980118  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:31.980386  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 12:00:11.981507  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 12:00:11.981791  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 12:00:11.981804  662586 kubeadm.go:310] 
	I1209 12:00:11.981863  662586 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 12:00:11.981916  662586 kubeadm.go:310] 		timed out waiting for the condition
	I1209 12:00:11.981926  662586 kubeadm.go:310] 
	I1209 12:00:11.981977  662586 kubeadm.go:310] 	This error is likely caused by:
	I1209 12:00:11.982028  662586 kubeadm.go:310] 		- The kubelet is not running
	I1209 12:00:11.982232  662586 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 12:00:11.982262  662586 kubeadm.go:310] 
	I1209 12:00:11.982449  662586 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 12:00:11.982506  662586 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 12:00:11.982555  662586 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 12:00:11.982564  662586 kubeadm.go:310] 
	I1209 12:00:11.982709  662586 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 12:00:11.982824  662586 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 12:00:11.982837  662586 kubeadm.go:310] 
	I1209 12:00:11.982975  662586 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 12:00:11.983092  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 12:00:11.983186  662586 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 12:00:11.983259  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 12:00:11.983308  662586 kubeadm.go:310] 
	I1209 12:00:11.983442  662586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 12:00:11.983534  662586 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 12:00:11.983622  662586 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 12:00:11.983692  662586 kubeadm.go:394] duration metric: took 7m57.372617524s to StartCluster
	I1209 12:00:11.983778  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 12:00:11.983852  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 12:00:12.032068  662586 cri.go:89] found id: ""
	I1209 12:00:12.032110  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.032126  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 12:00:12.032139  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 12:00:12.032232  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 12:00:12.074929  662586 cri.go:89] found id: ""
	I1209 12:00:12.074977  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.074990  662586 logs.go:284] No container was found matching "etcd"
	I1209 12:00:12.075001  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 12:00:12.075074  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 12:00:12.113547  662586 cri.go:89] found id: ""
	I1209 12:00:12.113582  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.113592  662586 logs.go:284] No container was found matching "coredns"
	I1209 12:00:12.113598  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 12:00:12.113661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 12:00:12.147436  662586 cri.go:89] found id: ""
	I1209 12:00:12.147465  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.147475  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 12:00:12.147481  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 12:00:12.147535  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 12:00:12.184398  662586 cri.go:89] found id: ""
	I1209 12:00:12.184439  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.184453  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 12:00:12.184463  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 12:00:12.184541  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 12:00:12.230844  662586 cri.go:89] found id: ""
	I1209 12:00:12.230884  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.230896  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 12:00:12.230905  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 12:00:12.230981  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 12:00:12.264897  662586 cri.go:89] found id: ""
	I1209 12:00:12.264930  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.264939  662586 logs.go:284] No container was found matching "kindnet"
	I1209 12:00:12.264946  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 12:00:12.265001  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 12:00:12.303553  662586 cri.go:89] found id: ""
	I1209 12:00:12.303594  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.303607  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
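The scan above issues one 'crictl ps' per control-plane component to see whether anything ever started; the same sweep can be expressed as a single loop on the node (illustrative sketch, using the component names from the log):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done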
	I1209 12:00:12.303622  662586 logs.go:123] Gathering logs for container status ...
	I1209 12:00:12.303638  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 12:00:12.342799  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 12:00:12.342838  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 12:00:12.392992  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 12:00:12.393039  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 12:00:12.407065  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 12:00:12.407100  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 12:00:12.483599  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 12:00:12.483651  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 12:00:12.483675  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
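For reference, the log-gathering pass above boils down to these node-side commands (copied from the Run: lines, with the 'which crictl' indirection dropped); they are what one would re-run by hand when triaging a failed v1.20.0 bring-up like this one:

    sudo crictl ps -a || sudo docker ps -a                                        # container status
    sudo journalctl -u kubelet -n 400                                             # kubelet
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400       # dmesg
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400                                                # CRI-O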
	W1209 12:00:12.591518  662586 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1209 12:00:12.591615  662586 out.go:270] * 
	W1209 12:00:12.591715  662586 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 12:00:12.591737  662586 out.go:270] * 
	W1209 12:00:12.592644  662586 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 12:00:12.596340  662586 out.go:201] 
	W1209 12:00:12.597706  662586 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 12:00:12.597757  662586 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 12:00:12.597798  662586 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 12:00:12.599219  662586 out.go:201] 
	
	
	==> CRI-O <==
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.489874669Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745614489841117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e7b37a4-4b99-4885-822d-33278cb9c862 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.490580966Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f54e92d-84b3-4cfd-b9cf-856e0e7b5927 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.490627054Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f54e92d-84b3-4cfd-b9cf-856e0e7b5927 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.490658065Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2f54e92d-84b3-4cfd-b9cf-856e0e7b5927 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.524382763Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=52fc8cad-6d84-49c9-9c1a-d903cab403b0 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.524507517Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=52fc8cad-6d84-49c9-9c1a-d903cab403b0 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.525578209Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=faa6faa7-70ba-49cd-b85a-633a5ec46ffd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.526103934Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745614526071461,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=faa6faa7-70ba-49cd-b85a-633a5ec46ffd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.526585792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8350842-7ae0-4515-a6db-b6e4651befaa name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.526640153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8350842-7ae0-4515-a6db-b6e4651befaa name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.526672765Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c8350842-7ae0-4515-a6db-b6e4651befaa name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.558625239Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a7e51404-cdb9-437c-b736-be59a16a48fc name=/runtime.v1.RuntimeService/Version
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.558704668Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7e51404-cdb9-437c-b736-be59a16a48fc name=/runtime.v1.RuntimeService/Version
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.559600937Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23d9554c-44f6-427d-9297-4dea4b2b7322 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.559965391Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745614559942163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23d9554c-44f6-427d-9297-4dea4b2b7322 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.560467626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03dd81f4-778a-4ff5-b582-43965bb37230 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.560533372Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03dd81f4-778a-4ff5-b582-43965bb37230 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.560582241Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=03dd81f4-778a-4ff5-b582-43965bb37230 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.592019116Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9e60512-8319-4dd3-a89e-83decc37066b name=/runtime.v1.RuntimeService/Version
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.592106122Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9e60512-8319-4dd3-a89e-83decc37066b name=/runtime.v1.RuntimeService/Version
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.593307687Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8eb3a94-907f-43e5-9e2d-8fcf5cee1f91 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.593732619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745614593708866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8eb3a94-907f-43e5-9e2d-8fcf5cee1f91 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.594271941Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d06a6ae4-2d94-4ddf-8954-4bc1e518ead9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.594341702Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d06a6ae4-2d94-4ddf-8954-4bc1e518ead9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:00:14 old-k8s-version-014592 crio[629]: time="2024-12-09 12:00:14.594384550Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d06a6ae4-2d94-4ddf-8954-4bc1e518ead9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 11:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053266] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039222] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.927032] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.003479] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.562691] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 9 11:52] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.070928] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073924] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.215176] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.123356] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.253740] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +6.933985] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.063858] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.761344] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +9.884362] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 9 11:56] systemd-fstab-generator[5066]: Ignoring "noauto" option for root device
	[Dec 9 11:58] systemd-fstab-generator[5348]: Ignoring "noauto" option for root device
	[  +0.064846] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:00:14 up 8 min,  0 users,  load average: 0.03, 0.14, 0.10
	Linux old-k8s-version-014592 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 09 12:00:11 old-k8s-version-014592 kubelet[5528]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc000cdc120)
	Dec 09 12:00:11 old-k8s-version-014592 kubelet[5528]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129
	Dec 09 12:00:11 old-k8s-version-014592 kubelet[5528]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Dec 09 12:00:11 old-k8s-version-014592 kubelet[5528]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Dec 09 12:00:11 old-k8s-version-014592 kubelet[5528]: goroutine 159 [select]:
	Dec 09 12:00:11 old-k8s-version-014592 kubelet[5528]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000823ef0, 0x4f0ac20, 0xc000cd83c0, 0x1, 0xc0001020c0)
	Dec 09 12:00:11 old-k8s-version-014592 kubelet[5528]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Dec 09 12:00:11 old-k8s-version-014592 kubelet[5528]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000bcec40, 0xc0001020c0)
	Dec 09 12:00:11 old-k8s-version-014592 kubelet[5528]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Dec 09 12:00:11 old-k8s-version-014592 kubelet[5528]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Dec 09 12:00:11 old-k8s-version-014592 kubelet[5528]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Dec 09 12:00:11 old-k8s-version-014592 kubelet[5528]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c13820, 0xc000cd4700)
	Dec 09 12:00:11 old-k8s-version-014592 kubelet[5528]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Dec 09 12:00:11 old-k8s-version-014592 kubelet[5528]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Dec 09 12:00:11 old-k8s-version-014592 kubelet[5528]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Dec 09 12:00:11 old-k8s-version-014592 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 09 12:00:11 old-k8s-version-014592 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 12:00:12 old-k8s-version-014592 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Dec 09 12:00:12 old-k8s-version-014592 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 09 12:00:12 old-k8s-version-014592 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 09 12:00:12 old-k8s-version-014592 kubelet[5595]: I1209 12:00:12.621427    5595 server.go:416] Version: v1.20.0
	Dec 09 12:00:12 old-k8s-version-014592 kubelet[5595]: I1209 12:00:12.621827    5595 server.go:837] Client rotation is on, will bootstrap in background
	Dec 09 12:00:12 old-k8s-version-014592 kubelet[5595]: I1209 12:00:12.627255    5595 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 09 12:00:12 old-k8s-version-014592 kubelet[5595]: W1209 12:00:12.633563    5595 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 09 12:00:12 old-k8s-version-014592 kubelet[5595]: I1209 12:00:12.633667    5595 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-014592 -n old-k8s-version-014592
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-014592 -n old-k8s-version-014592: exit status 2 (250.388841ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-014592" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (708.59s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476: exit status 3 (3.167757963s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 11:49:50.222514  662898 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.25:22: connect: no route to host
	E1209 11:49:50.222535  662898 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.25:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-482476 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-482476 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15167738s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.25:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-482476 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476: exit status 3 (3.064101141s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 11:49:59.438629  662978 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.25:22: connect: no route to host
	E1209 11:49:59.438662  662978 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.25:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-482476" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-820741 -n no-preload-820741
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-09 12:05:36.840764249 +0000 UTC m=+5525.301489766
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-820741 -n no-preload-820741
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-820741 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-820741 logs -n 25: (1.950399365s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p running-upgrade-119214                              | running-upgrade-119214       | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-905993 | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:43 UTC |
	|         | disable-driver-mounts-905993                           |                              |         |         |                     |                     |
	| start   | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-005123            | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC | 09 Dec 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-005123                                  | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-820741             | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC | 09 Dec 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:46 UTC |
	| start   | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:47 UTC |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-005123                 | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-005123                                  | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-014592        | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-820741                  | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC | 09 Dec 24 11:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-482476  | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC | 09 Dec 24 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC |                     |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC | 09 Dec 24 11:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-014592             | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC | 09 Dec 24 11:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-482476       | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:49 UTC | 09 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 11:49:59
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 11:49:59.489110  663024 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:49:59.489218  663024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:49:59.489223  663024 out.go:358] Setting ErrFile to fd 2...
	I1209 11:49:59.489227  663024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:49:59.489393  663024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:49:59.489968  663024 out.go:352] Setting JSON to false
	I1209 11:49:59.491001  663024 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":16343,"bootTime":1733728656,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 11:49:59.491116  663024 start.go:139] virtualization: kvm guest
	I1209 11:49:59.493422  663024 out.go:177] * [default-k8s-diff-port-482476] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 11:49:59.495230  663024 notify.go:220] Checking for updates...
	I1209 11:49:59.495310  663024 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:49:59.496833  663024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:49:59.498350  663024 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:49:59.499799  663024 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:49:59.501159  663024 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 11:49:59.502351  663024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:49:59.503976  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:49:59.504355  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:49:59.504434  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:49:59.519867  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39863
	I1209 11:49:59.520292  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:49:59.520859  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:49:59.520886  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:49:59.521235  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:49:59.521438  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:49:59.521739  663024 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:49:59.522124  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:49:59.522225  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:49:59.537355  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39881
	I1209 11:49:59.537882  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:49:59.538473  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:49:59.538507  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:49:59.538862  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:49:59.539111  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:49:59.573642  663024 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 11:49:59.574808  663024 start.go:297] selected driver: kvm2
	I1209 11:49:59.574821  663024 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:49:59.574939  663024 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:49:59.575618  663024 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:49:59.575711  663024 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 11:49:59.591990  663024 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 11:49:59.592425  663024 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:49:59.592468  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:49:59.592500  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:49:59.592535  663024 start.go:340] cluster config:
	{Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:49:59.592645  663024 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:49:59.594451  663024 out.go:177] * Starting "default-k8s-diff-port-482476" primary control-plane node in "default-k8s-diff-port-482476" cluster
	I1209 11:49:56.270467  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:49:59.342522  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:49:59.595812  663024 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:49:59.595868  663024 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 11:49:59.595876  663024 cache.go:56] Caching tarball of preloaded images
	I1209 11:49:59.595966  663024 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 11:49:59.595978  663024 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 11:49:59.596080  663024 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/config.json ...
	I1209 11:49:59.596311  663024 start.go:360] acquireMachinesLock for default-k8s-diff-port-482476: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:50:05.422464  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:08.494459  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:14.574530  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:17.646514  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:23.726481  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:26.798485  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:32.878439  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:35.950501  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:42.030519  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:45.102528  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:51.182489  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:54.254539  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:00.334461  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:03.406475  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:09.486483  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:12.558522  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:18.638454  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
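
Note: the run of "no route to host" errors above is process 661546 (the embed-certs-005123 restart) repeatedly failing to open a TCP connection to 192.168.72.218:22 while the VM is unreachable; it gives up at 11:51:24 below with "provision: host is not running". A minimal Go sketch of the same kind of reachability probe, using only the standard library and the address taken from the log:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	addr := "192.168.72.218:22" // SSH endpoint the log keeps dialing
    	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
    	if err != nil {
    		// A stopped or unreachable VM typically surfaces here as
    		// "connect: no route to host" or a timeout.
    		fmt.Printf("dial %s failed: %v\n", addr, err)
    		return
    	}
    	conn.Close()
    	fmt.Printf("dial %s succeeded\n", addr)
    }
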
	I1209 11:51:24.715494  662109 start.go:364] duration metric: took 4m3.035196519s to acquireMachinesLock for "no-preload-820741"
	I1209 11:51:24.715567  662109 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:51:24.715578  662109 fix.go:54] fixHost starting: 
	I1209 11:51:24.715984  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:51:24.716040  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:51:24.731722  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I1209 11:51:24.732247  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:51:24.732853  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:51:24.732876  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:51:24.733244  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:51:24.733437  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:24.733606  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:51:24.735295  662109 fix.go:112] recreateIfNeeded on no-preload-820741: state=Stopped err=<nil>
	I1209 11:51:24.735325  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	W1209 11:51:24.735521  662109 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:51:24.737237  662109 out.go:177] * Restarting existing kvm2 VM for "no-preload-820741" ...
	I1209 11:51:21.710446  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:24.712631  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:51:24.712695  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:51:24.713111  661546 buildroot.go:166] provisioning hostname "embed-certs-005123"
	I1209 11:51:24.713140  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:51:24.713398  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:51:24.715321  661546 machine.go:96] duration metric: took 4m34.547615205s to provisionDockerMachine
	I1209 11:51:24.715372  661546 fix.go:56] duration metric: took 4m34.572283015s for fixHost
	I1209 11:51:24.715381  661546 start.go:83] releasing machines lock for "embed-certs-005123", held for 4m34.572321017s
	W1209 11:51:24.715401  661546 start.go:714] error starting host: provision: host is not running
	W1209 11:51:24.715538  661546 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1209 11:51:24.715550  661546 start.go:729] Will try again in 5 seconds ...
	I1209 11:51:24.738507  662109 main.go:141] libmachine: (no-preload-820741) Calling .Start
	I1209 11:51:24.738692  662109 main.go:141] libmachine: (no-preload-820741) Ensuring networks are active...
	I1209 11:51:24.739450  662109 main.go:141] libmachine: (no-preload-820741) Ensuring network default is active
	I1209 11:51:24.739799  662109 main.go:141] libmachine: (no-preload-820741) Ensuring network mk-no-preload-820741 is active
	I1209 11:51:24.740206  662109 main.go:141] libmachine: (no-preload-820741) Getting domain xml...
	I1209 11:51:24.740963  662109 main.go:141] libmachine: (no-preload-820741) Creating domain...
	I1209 11:51:25.958244  662109 main.go:141] libmachine: (no-preload-820741) Waiting to get IP...
	I1209 11:51:25.959122  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:25.959507  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:25.959585  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:25.959486  663348 retry.go:31] will retry after 256.759149ms: waiting for machine to come up
	I1209 11:51:26.218626  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.219187  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.219222  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.219121  663348 retry.go:31] will retry after 259.957451ms: waiting for machine to come up
	I1209 11:51:26.480403  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.480800  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.480828  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.480753  663348 retry.go:31] will retry after 482.242492ms: waiting for machine to come up
	I1209 11:51:29.718422  661546 start.go:360] acquireMachinesLock for embed-certs-005123: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:51:26.964420  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.964870  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.964903  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.964821  663348 retry.go:31] will retry after 386.489156ms: waiting for machine to come up
	I1209 11:51:27.353471  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:27.353850  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:27.353875  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:27.353796  663348 retry.go:31] will retry after 602.322538ms: waiting for machine to come up
	I1209 11:51:27.957621  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:27.958020  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:27.958051  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:27.957967  663348 retry.go:31] will retry after 747.355263ms: waiting for machine to come up
	I1209 11:51:28.707049  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:28.707486  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:28.707515  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:28.707436  663348 retry.go:31] will retry after 1.034218647s: waiting for machine to come up
	I1209 11:51:29.743755  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:29.744171  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:29.744213  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:29.744119  663348 retry.go:31] will retry after 1.348194555s: waiting for machine to come up
	I1209 11:51:31.094696  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:31.095202  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:31.095234  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:31.095124  663348 retry.go:31] will retry after 1.226653754s: waiting for machine to come up
	I1209 11:51:32.323529  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:32.323935  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:32.323959  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:32.323884  663348 retry.go:31] will retry after 2.008914491s: waiting for machine to come up
	I1209 11:51:34.335246  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:34.335619  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:34.335658  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:34.335593  663348 retry.go:31] will retry after 1.835576732s: waiting for machine to come up
	I1209 11:51:36.173316  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:36.173752  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:36.173786  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:36.173711  663348 retry.go:31] will retry after 3.204076548s: waiting for machine to come up
	I1209 11:51:39.382184  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:39.382619  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:39.382656  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:39.382560  663348 retry.go:31] will retry after 3.298451611s: waiting for machine to come up
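
Note: in parallel, process 662109 is polling libvirt for the no-preload-820741 DHCP lease; each "will retry after …" line comes from a jittered backoff in retry.go. A rough Go sketch of such a loop, where lookupIP is a hypothetical placeholder standing in for the lease query (it is not a minikube function):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a stand-in for querying the libvirt DHCP leases.
    func lookupIP() (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    func main() {
    	delay := 250 * time.Millisecond
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			fmt.Println("machine is up at", ip)
    			return
    		}
    		// Jittered, roughly increasing wait, similar in spirit to the
    		// "will retry after 256.759149ms" lines in the log.
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay += delay / 2
    	}
    	fmt.Println("timed out waiting for an IP")
    }
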
	I1209 11:51:44.103077  662586 start.go:364] duration metric: took 3m16.308265809s to acquireMachinesLock for "old-k8s-version-014592"
	I1209 11:51:44.103164  662586 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:51:44.103178  662586 fix.go:54] fixHost starting: 
	I1209 11:51:44.103657  662586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:51:44.103716  662586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:51:44.121162  662586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I1209 11:51:44.121672  662586 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:51:44.122203  662586 main.go:141] libmachine: Using API Version  1
	I1209 11:51:44.122232  662586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:51:44.122644  662586 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:51:44.122852  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:51:44.123023  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetState
	I1209 11:51:44.124544  662586 fix.go:112] recreateIfNeeded on old-k8s-version-014592: state=Stopped err=<nil>
	I1209 11:51:44.124567  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	W1209 11:51:44.124704  662586 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:51:44.126942  662586 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-014592" ...
	I1209 11:51:42.684438  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.684824  662109 main.go:141] libmachine: (no-preload-820741) Found IP for machine: 192.168.39.169
	I1209 11:51:42.684859  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has current primary IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.684867  662109 main.go:141] libmachine: (no-preload-820741) Reserving static IP address...
	I1209 11:51:42.685269  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "no-preload-820741", mac: "52:54:00:27:4c:0e", ip: "192.168.39.169"} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.685296  662109 main.go:141] libmachine: (no-preload-820741) DBG | skip adding static IP to network mk-no-preload-820741 - found existing host DHCP lease matching {name: "no-preload-820741", mac: "52:54:00:27:4c:0e", ip: "192.168.39.169"}
	I1209 11:51:42.685311  662109 main.go:141] libmachine: (no-preload-820741) Reserved static IP address: 192.168.39.169
	I1209 11:51:42.685334  662109 main.go:141] libmachine: (no-preload-820741) Waiting for SSH to be available...
	I1209 11:51:42.685348  662109 main.go:141] libmachine: (no-preload-820741) DBG | Getting to WaitForSSH function...
	I1209 11:51:42.687295  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.687588  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.687625  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.687702  662109 main.go:141] libmachine: (no-preload-820741) DBG | Using SSH client type: external
	I1209 11:51:42.687790  662109 main.go:141] libmachine: (no-preload-820741) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa (-rw-------)
	I1209 11:51:42.687824  662109 main.go:141] libmachine: (no-preload-820741) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:51:42.687844  662109 main.go:141] libmachine: (no-preload-820741) DBG | About to run SSH command:
	I1209 11:51:42.687857  662109 main.go:141] libmachine: (no-preload-820741) DBG | exit 0
	I1209 11:51:42.822609  662109 main.go:141] libmachine: (no-preload-820741) DBG | SSH cmd err, output: <nil>: 
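
Note: the DBG lines above show libmachine shelling out to the system ssh binary with a fixed option set and the machine's private key, probing with "exit 0" until sshd answers. A hedged Go sketch of that kind of external invocation, reusing the options visible in the log; the key path is a placeholder, not the real workspace path:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Options mirror the "Using SSH client type: external" line in the log.
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", "/path/to/machines/no-preload-820741/id_rsa", // placeholder key path
    		"-p", "22",
    		"docker@192.168.39.169",
    		"exit 0", // the probe command from the log
    	}
    	out, err := exec.Command("ssh", args...).CombinedOutput()
    	fmt.Printf("ssh probe: err=%v output=%q\n", err, string(out))
    }
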
	I1209 11:51:42.822996  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetConfigRaw
	I1209 11:51:42.823665  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:42.826484  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.826783  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.826808  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.827050  662109 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/config.json ...
	I1209 11:51:42.827323  662109 machine.go:93] provisionDockerMachine start ...
	I1209 11:51:42.827346  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:42.827620  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:42.830224  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.830569  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.830599  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.830717  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:42.830909  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.831107  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.831274  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:42.831454  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:42.831790  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:42.831807  662109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:51:42.938456  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:51:42.938500  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:42.938778  662109 buildroot.go:166] provisioning hostname "no-preload-820741"
	I1209 11:51:42.938813  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:42.939023  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:42.941706  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.942236  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.942267  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.942390  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:42.942606  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.942773  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.942922  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:42.943177  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:42.943382  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:42.943406  662109 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-820741 && echo "no-preload-820741" | sudo tee /etc/hostname
	I1209 11:51:43.065816  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-820741
	
	I1209 11:51:43.065849  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.068607  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.068916  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.068951  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.069127  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.069256  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.069351  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.069514  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.069637  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.069841  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.069861  662109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-820741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-820741/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-820741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:51:43.182210  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:51:43.182257  662109 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:51:43.182289  662109 buildroot.go:174] setting up certificates
	I1209 11:51:43.182305  662109 provision.go:84] configureAuth start
	I1209 11:51:43.182323  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:43.182674  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:43.185513  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.185872  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.185897  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.186018  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.188128  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.188482  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.188534  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.188668  662109 provision.go:143] copyHostCerts
	I1209 11:51:43.188752  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:51:43.188774  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:51:43.188840  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:51:43.188928  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:51:43.188936  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:51:43.188963  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:51:43.189019  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:51:43.189027  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:51:43.189049  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:51:43.189104  662109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.no-preload-820741 san=[127.0.0.1 192.168.39.169 localhost minikube no-preload-820741]
	I1209 11:51:43.488258  662109 provision.go:177] copyRemoteCerts
	I1209 11:51:43.488336  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:51:43.488367  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.491689  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.492025  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.492059  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.492267  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.492465  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.492635  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.492768  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:43.577708  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:51:43.602000  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 11:51:43.627251  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:51:43.651591  662109 provision.go:87] duration metric: took 469.266358ms to configureAuth
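
Note: configureAuth above regenerates the machine's server certificate with the SANs reported by provision.go (127.0.0.1, 192.168.39.169, localhost, minikube, no-preload-820741) and then copies the CA, cert, and key into /etc/docker on the guest. Purely as an illustration of issuing a certificate with IP and DNS SANs, a self-contained Go sketch; it self-signs instead of chaining to the minikube CA, so it is not the code path used here:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-820741"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs as reported by provision.go in the log.
    		DNSNames:    []string{"localhost", "minikube", "no-preload-820741"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.169")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
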
	I1209 11:51:43.651626  662109 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:51:43.651863  662109 config.go:182] Loaded profile config "no-preload-820741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:51:43.652059  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.655150  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.655489  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.655518  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.655738  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.655963  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.656146  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.656295  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.656483  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.656688  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.656710  662109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:51:43.870704  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:51:43.870738  662109 machine.go:96] duration metric: took 1.043398486s to provisionDockerMachine
	I1209 11:51:43.870756  662109 start.go:293] postStartSetup for "no-preload-820741" (driver="kvm2")
	I1209 11:51:43.870771  662109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:51:43.870796  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:43.871158  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:51:43.871186  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.873863  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.874207  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.874230  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.874408  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.874610  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.874800  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.874925  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:43.956874  662109 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:51:43.960825  662109 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:51:43.960853  662109 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:51:43.960919  662109 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:51:43.960993  662109 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:51:43.961095  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:51:43.970138  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:51:43.991975  662109 start.go:296] duration metric: took 121.20118ms for postStartSetup
	I1209 11:51:43.992020  662109 fix.go:56] duration metric: took 19.276442325s for fixHost
	I1209 11:51:43.992043  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.994707  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.995035  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.995069  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.995186  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.995403  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.995568  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.995716  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.995927  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.996107  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.996117  662109 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:51:44.102890  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745104.077047488
	
	I1209 11:51:44.102914  662109 fix.go:216] guest clock: 1733745104.077047488
	I1209 11:51:44.102922  662109 fix.go:229] Guest: 2024-12-09 11:51:44.077047488 +0000 UTC Remote: 2024-12-09 11:51:43.992024296 +0000 UTC m=+262.463051778 (delta=85.023192ms)
	I1209 11:51:44.102952  662109 fix.go:200] guest clock delta is within tolerance: 85.023192ms
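
Note: the guest-clock check compares the timestamp returned by `date +%s.%N` inside the VM against the host-side reference time and only resynchronizes when the delta exceeds a tolerance; here the 85 ms skew is accepted. A small Go sketch of that comparison using the two timestamps from the log, with the tolerance as an assumed placeholder rather than the exact minikube setting:

    package main

    import (
    	"fmt"
    	"math"
    	"time"
    )

    func main() {
    	// Values from the log: guest clock vs. host-side reference time.
    	guest := time.Unix(1733745104, 77047488)
    	remote := time.Date(2024, 12, 9, 11, 51, 43, 992024296, time.UTC)

    	delta := guest.Sub(remote)
    	tolerance := 1 * time.Second // placeholder tolerance
    	if math.Abs(float64(delta)) <= float64(tolerance) {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }
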
	I1209 11:51:44.102957  662109 start.go:83] releasing machines lock for "no-preload-820741", held for 19.387413234s
	I1209 11:51:44.102980  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.103272  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:44.105929  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.106314  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.106341  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.106567  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107102  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107323  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107453  662109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:51:44.107507  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:44.107640  662109 ssh_runner.go:195] Run: cat /version.json
	I1209 11:51:44.107672  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:44.110422  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110792  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.110822  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110840  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110984  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:44.111194  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:44.111376  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.111395  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.111408  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:44.111569  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:44.111589  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:44.111722  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:44.111827  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:44.111986  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:44.228799  662109 ssh_runner.go:195] Run: systemctl --version
	I1209 11:51:44.234678  662109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:51:44.383290  662109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:51:44.388906  662109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:51:44.388981  662109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:51:44.405271  662109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:51:44.405308  662109 start.go:495] detecting cgroup driver to use...
	I1209 11:51:44.405389  662109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:51:44.425480  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:51:44.439827  662109 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:51:44.439928  662109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:51:44.454750  662109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:51:44.470828  662109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:51:44.595400  662109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:51:44.756743  662109 docker.go:233] disabling docker service ...
	I1209 11:51:44.756817  662109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:51:44.774069  662109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:51:44.788188  662109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:51:44.909156  662109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:51:45.036992  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:51:45.051284  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:51:45.071001  662109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:51:45.071074  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.081491  662109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:51:45.081549  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.091476  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.103237  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.114723  662109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:51:45.126330  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.136501  662109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.152804  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.163221  662109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:51:45.173297  662109 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:51:45.173379  662109 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:51:45.186209  662109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:51:45.195773  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:51:45.339593  662109 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:51:45.438766  662109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:51:45.438851  662109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:51:45.444775  662109 start.go:563] Will wait 60s for crictl version
	I1209 11:51:45.444847  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:45.449585  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:51:45.493796  662109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:51:45.493899  662109 ssh_runner.go:195] Run: crio --version
	I1209 11:51:45.521391  662109 ssh_runner.go:195] Run: crio --version
	I1209 11:51:45.551249  662109 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:51:45.552714  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:45.555910  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:45.556271  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:45.556298  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:45.556571  662109 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 11:51:45.560718  662109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:51:45.573027  662109 kubeadm.go:883] updating cluster {Name:no-preload-820741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:51:45.573171  662109 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:51:45.573226  662109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:51:45.613696  662109 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:51:45.613724  662109 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 11:51:45.613810  662109 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.613847  662109 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.613864  662109 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.613880  662109 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:45.613857  662109 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1209 11:51:45.613939  662109 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.613801  662109 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.613810  662109 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:45.615885  662109 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.615983  662109 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.615889  662109 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:45.615885  662109 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:45.615893  662109 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.615891  662109 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1209 11:51:45.615897  662109 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.615893  662109 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.819757  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.836546  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1209 11:51:45.851918  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.857461  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.857468  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.863981  662109 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1209 11:51:45.864038  662109 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.864122  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:45.865289  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.868361  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.030476  662109 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1209 11:51:46.030525  662109 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.030582  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030525  662109 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1209 11:51:46.030603  662109 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1209 11:51:46.030625  662109 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.030652  662109 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.030694  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030655  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030720  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.030760  662109 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1209 11:51:46.030794  662109 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.030823  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030823  662109 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1209 11:51:46.030845  662109 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.030868  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.041983  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.042072  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.042088  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.086909  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.086966  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.086997  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.141636  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.141723  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.141779  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.249908  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.249972  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.250024  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.250056  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.266345  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.266425  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.376691  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1209 11:51:46.376784  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1209 11:51:46.376904  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.376937  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.376911  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:46.376980  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.407997  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1209 11:51:46.408015  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1209 11:51:46.408120  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:46.408120  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:46.450341  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1209 11:51:46.450374  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.450445  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.450503  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1209 11:51:46.450537  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1209 11:51:46.450541  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1209 11:51:46.450570  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1209 11:51:46.450621  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:46.450621  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:46.450621  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1209 11:51:44.128421  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .Start
	I1209 11:51:44.128663  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring networks are active...
	I1209 11:51:44.129435  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring network default is active
	I1209 11:51:44.129805  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring network mk-old-k8s-version-014592 is active
	I1209 11:51:44.130314  662586 main.go:141] libmachine: (old-k8s-version-014592) Getting domain xml...
	I1209 11:51:44.131070  662586 main.go:141] libmachine: (old-k8s-version-014592) Creating domain...
	I1209 11:51:45.405214  662586 main.go:141] libmachine: (old-k8s-version-014592) Waiting to get IP...
	I1209 11:51:45.406116  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:45.406680  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:45.406716  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:45.406613  663492 retry.go:31] will retry after 249.130873ms: waiting for machine to come up
	I1209 11:51:45.657224  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:45.657727  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:45.657756  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:45.657687  663492 retry.go:31] will retry after 363.458278ms: waiting for machine to come up
	I1209 11:51:46.023431  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.023912  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.023945  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.023851  663492 retry.go:31] will retry after 313.220722ms: waiting for machine to come up
	I1209 11:51:46.339300  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.339850  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.339876  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.339791  663492 retry.go:31] will retry after 517.613322ms: waiting for machine to come up
	I1209 11:51:46.859825  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.860229  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.860260  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.860198  663492 retry.go:31] will retry after 710.195232ms: waiting for machine to come up
	I1209 11:51:47.572460  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:47.573030  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:47.573080  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:47.573008  663492 retry.go:31] will retry after 620.717522ms: waiting for machine to come up
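[editor's note] The retry.go lines above show the libvirt IP poll backing off with randomized, growing delays ("will retry after 249ms ... 363ms ... 517ms ..."). A small, hypothetical Go sketch of that pattern (not minikube's retry package; waitFor and the delay constants are invented for illustration) is:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() with a jittered, growing delay until it succeeds or
// the deadline passes, mirroring the "will retry after ..." log lines.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		// Add up to 50% jitter, then grow the base delay, capped at a few seconds.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay += delay / 2
		}
	}
}

func main() {
	attempts := 0
	_ = waitFor(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("no IP address yet")
		}
		return nil
	}, 2*time.Minute)
}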
	I1209 11:51:46.869631  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.822213  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.371704342s)
	I1209 11:51:48.822263  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1209 11:51:48.822262  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.371603127s)
	I1209 11:51:48.822296  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1209 11:51:48.822295  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.371584353s)
	I1209 11:51:48.822298  662109 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:48.822309  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1209 11:51:48.822324  662109 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.952666874s)
	I1209 11:51:48.822364  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:48.822367  662109 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1209 11:51:48.822416  662109 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.822460  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:50.794288  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.971891497s)
	I1209 11:51:50.794330  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1209 11:51:50.794357  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:50.794357  662109 ssh_runner.go:235] Completed: which crictl: (1.971876587s)
	I1209 11:51:50.794417  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:50.794437  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.195603  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:48.196140  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:48.196172  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:48.196083  663492 retry.go:31] will retry after 747.45082ms: waiting for machine to come up
	I1209 11:51:48.945230  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:48.945682  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:48.945737  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:48.945661  663492 retry.go:31] will retry after 1.307189412s: waiting for machine to come up
	I1209 11:51:50.254747  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:50.255335  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:50.255359  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:50.255276  663492 retry.go:31] will retry after 1.269881759s: waiting for machine to come up
	I1209 11:51:51.526966  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:51.527400  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:51.527431  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:51.527348  663492 retry.go:31] will retry after 1.424091669s: waiting for machine to come up
	I1209 11:51:52.958981  662109 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.164517823s)
	I1209 11:51:52.959044  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.164597978s)
	I1209 11:51:52.959089  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1209 11:51:52.959120  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:52.959057  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:52.959203  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:53.007629  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:54.832641  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.873398185s)
	I1209 11:51:54.832686  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1209 11:51:54.832694  662109 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.825022672s)
	I1209 11:51:54.832714  662109 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:54.832748  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1209 11:51:54.832769  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:54.832853  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:51:52.953290  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:52.953711  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:52.953743  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:52.953658  663492 retry.go:31] will retry after 2.009829783s: waiting for machine to come up
	I1209 11:51:54.965818  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:54.966337  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:54.966372  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:54.966285  663492 retry.go:31] will retry after 2.209879817s: waiting for machine to come up
	I1209 11:51:57.177397  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:57.177870  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:57.177901  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:57.177805  663492 retry.go:31] will retry after 2.999056002s: waiting for machine to come up
	I1209 11:51:58.433813  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.600992195s)
	I1209 11:51:58.433889  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1209 11:51:58.433913  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:58.433831  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.600948593s)
	I1209 11:51:58.433947  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1209 11:51:58.433961  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:59.792012  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.35801884s)
	I1209 11:51:59.792049  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1209 11:51:59.792078  662109 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:51:59.792127  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:52:00.635140  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1209 11:52:00.635193  662109 cache_images.go:123] Successfully loaded all cached images
	I1209 11:52:00.635212  662109 cache_images.go:92] duration metric: took 15.021464053s to LoadCachedImages
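[editor's note] The LoadCachedImages sequence above boils down to: inspect each required image in the runtime, and when it is missing, reuse the cached tarball already on the node and podman-load it. A compressed, hypothetical Go sketch of that flow (running the same commands locally via os/exec instead of over SSH; ensureImage is an invented helper, not minikube code) is:

package main

import (
	"fmt"
	"os/exec"
)

// ensureImage loads an image tarball into podman only when the image is not
// already present, mirroring the inspect / "needs transfer" / load steps above.
func ensureImage(image, tarball string) error {
	// "sudo podman image inspect --format {{.Id}} <image>"
	if err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already in the container runtime
	}
	fmt.Printf("%q needs transfer, loading %s\n", image, tarball)
	// "sudo podman load -i /var/lib/minikube/images/<tarball>"
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load: %v: %s", err, out)
	}
	return nil
}

func main() {
	_ = ensureImage("registry.k8s.io/kube-apiserver:v1.31.2",
		"/var/lib/minikube/images/kube-apiserver_v1.31.2")
}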
	I1209 11:52:00.635232  662109 kubeadm.go:934] updating node { 192.168.39.169 8443 v1.31.2 crio true true} ...
	I1209 11:52:00.635395  662109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-820741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:00.635481  662109 ssh_runner.go:195] Run: crio config
	I1209 11:52:00.680321  662109 cni.go:84] Creating CNI manager for ""
	I1209 11:52:00.680345  662109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:00.680370  662109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:00.680394  662109 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.169 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-820741 NodeName:no-preload-820741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:00.680545  662109 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-820741"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.169"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.169"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:00.680614  662109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:00.690391  662109 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:00.690484  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:00.699034  662109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1209 11:52:00.714710  662109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:00.730375  662109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1209 11:52:00.747519  662109 ssh_runner.go:195] Run: grep 192.168.39.169	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:00.751163  662109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
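[editor's note] The /etc/hosts one-liner above filters out any stale control-plane.minikube.internal entry and appends a fresh one pointing at 192.168.39.169. A hypothetical Go equivalent of that rewrite (pinHost is an illustrative name, not minikube's implementation) might be:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites an /etc/hosts-style file so that exactly one line maps
// name to ip, the same effect as the grep -v / echo / cp pipeline above.
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry, like grep -v $'\t<name>$'
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = pinHost("/etc/hosts", "192.168.39.169", "control-plane.minikube.internal")
}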
	I1209 11:52:00.762405  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:00.881308  662109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:00.898028  662109 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741 for IP: 192.168.39.169
	I1209 11:52:00.898060  662109 certs.go:194] generating shared ca certs ...
	I1209 11:52:00.898085  662109 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:00.898349  662109 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:00.898415  662109 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:00.898429  662109 certs.go:256] generating profile certs ...
	I1209 11:52:00.898565  662109 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.key
	I1209 11:52:00.898646  662109 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.key.814e22a1
	I1209 11:52:00.898701  662109 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.key
	I1209 11:52:00.898859  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:00.898904  662109 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:00.898918  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:00.898949  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:00.898982  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:00.899007  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:00.899045  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:00.899994  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:00.943848  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:00.970587  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:01.025164  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:01.055766  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 11:52:01.089756  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:52:01.112171  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:01.135928  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 11:52:01.157703  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:01.179806  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:01.201663  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:01.223314  662109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:01.239214  662109 ssh_runner.go:195] Run: openssl version
	I1209 11:52:01.244687  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:01.254630  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.258801  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.258849  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.264219  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:01.274077  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:01.284511  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.289141  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.289216  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.295079  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:01.305606  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:01.315795  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.320085  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.320147  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.325590  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:01.335747  662109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:01.340113  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:01.346217  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:01.351799  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:01.357441  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:01.362784  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:01.368210  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
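[editor's note] Each "openssl x509 -checkend 86400" call above asks whether a control-plane certificate expires within the next 24 hours (86400 seconds); a nonzero exit triggers regeneration. An equivalent check in Go, as a hypothetical illustration rather than minikube's code (expiresWithin is an invented name), is:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question "openssl x509 -checkend 86400" answers for 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}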
	I1209 11:52:01.373975  662109 kubeadm.go:392] StartCluster: {Name:no-preload-820741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:01.374101  662109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:01.374160  662109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:01.409780  662109 cri.go:89] found id: ""
	I1209 11:52:01.409852  662109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:01.419505  662109 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:01.419550  662109 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:01.419603  662109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:01.429000  662109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:01.429999  662109 kubeconfig.go:125] found "no-preload-820741" server: "https://192.168.39.169:8443"
	I1209 11:52:01.432151  662109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:01.440964  662109 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.169
	I1209 11:52:01.441003  662109 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:01.441021  662109 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:01.441084  662109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:01.474788  662109 cri.go:89] found id: ""
	I1209 11:52:01.474865  662109 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:01.491360  662109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:01.500483  662109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:01.500505  662109 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:01.500558  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:01.509190  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:01.509251  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:01.518248  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:01.526845  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:01.526909  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:01.535849  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:01.544609  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:01.544672  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:01.553527  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:01.561876  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:01.561928  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:00.178781  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:00.179225  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:52:00.179273  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:52:00.179165  663492 retry.go:31] will retry after 4.532370187s: waiting for machine to come up
	I1209 11:52:05.915073  663024 start.go:364] duration metric: took 2m6.318720193s to acquireMachinesLock for "default-k8s-diff-port-482476"
	I1209 11:52:05.915166  663024 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:52:05.915179  663024 fix.go:54] fixHost starting: 
	I1209 11:52:05.915652  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:05.915716  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:05.933810  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
	I1209 11:52:05.934363  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:05.935019  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:52:05.935071  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:05.935489  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:05.935682  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:05.935879  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:52:05.937627  663024 fix.go:112] recreateIfNeeded on default-k8s-diff-port-482476: state=Stopped err=<nil>
	I1209 11:52:05.937660  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	W1209 11:52:05.937842  663024 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:52:05.939893  663024 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-482476" ...
	I1209 11:52:01.570657  662109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:01.579782  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:01.680268  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.573653  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.762024  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.826444  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.932170  662109 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:02.932291  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.432933  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.933186  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.948529  662109 api_server.go:72] duration metric: took 1.016357501s to wait for apiserver process to appear ...
	I1209 11:52:03.948565  662109 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:03.948595  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.443635  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.443675  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:06.443692  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.490801  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.490839  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:06.490860  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.502460  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.502497  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:04.713201  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.713711  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has current primary IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.713817  662586 main.go:141] libmachine: (old-k8s-version-014592) Found IP for machine: 192.168.61.132
	I1209 11:52:04.713853  662586 main.go:141] libmachine: (old-k8s-version-014592) Reserving static IP address...
	I1209 11:52:04.714267  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "old-k8s-version-014592", mac: "52:54:00:54:72:3e", ip: "192.168.61.132"} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.714298  662586 main.go:141] libmachine: (old-k8s-version-014592) Reserved static IP address: 192.168.61.132
	I1209 11:52:04.714318  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | skip adding static IP to network mk-old-k8s-version-014592 - found existing host DHCP lease matching {name: "old-k8s-version-014592", mac: "52:54:00:54:72:3e", ip: "192.168.61.132"}
	I1209 11:52:04.714332  662586 main.go:141] libmachine: (old-k8s-version-014592) Waiting for SSH to be available...
	I1209 11:52:04.714347  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Getting to WaitForSSH function...
	I1209 11:52:04.716632  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.716972  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.717005  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.717129  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Using SSH client type: external
	I1209 11:52:04.717157  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa (-rw-------)
	I1209 11:52:04.717192  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:04.717206  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | About to run SSH command:
	I1209 11:52:04.717223  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | exit 0
	I1209 11:52:04.846290  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:04.846675  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetConfigRaw
	I1209 11:52:04.847483  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:04.850430  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.850859  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.850888  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.851113  662586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/config.json ...
	I1209 11:52:04.851328  662586 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:04.851348  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:04.851547  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:04.854318  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.854622  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.854654  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.854782  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:04.854959  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.855134  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.855276  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:04.855438  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:04.855696  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:04.855709  662586 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:04.963021  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:04.963059  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:04.963344  662586 buildroot.go:166] provisioning hostname "old-k8s-version-014592"
	I1209 11:52:04.963368  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:04.963545  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:04.966102  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.966461  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.966496  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.966607  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:04.966780  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.966919  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.967056  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:04.967221  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:04.967407  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:04.967419  662586 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-014592 && echo "old-k8s-version-014592" | sudo tee /etc/hostname
	I1209 11:52:05.094147  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-014592
	
	I1209 11:52:05.094210  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.097298  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.097729  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.097765  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.097949  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.098197  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.098460  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.098632  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.098829  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.099046  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.099082  662586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-014592' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-014592/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-014592' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:05.210739  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:05.210785  662586 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:05.210846  662586 buildroot.go:174] setting up certificates
	I1209 11:52:05.210859  662586 provision.go:84] configureAuth start
	I1209 11:52:05.210881  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:05.211210  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:05.214546  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.214937  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.214967  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.215167  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.217866  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.218269  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.218300  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.218452  662586 provision.go:143] copyHostCerts
	I1209 11:52:05.218530  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:05.218558  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:05.218630  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:05.218807  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:05.218820  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:05.218863  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:05.218943  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:05.218953  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:05.218983  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:05.219060  662586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-014592 san=[127.0.0.1 192.168.61.132 localhost minikube old-k8s-version-014592]
	I1209 11:52:05.292744  662586 provision.go:177] copyRemoteCerts
	I1209 11:52:05.292830  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:05.292867  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.296244  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.296670  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.296712  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.296896  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.297111  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.297330  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.297514  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.381148  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:05.404883  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 11:52:05.433421  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:05.456775  662586 provision.go:87] duration metric: took 245.894878ms to configureAuth
	I1209 11:52:05.456811  662586 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:05.457003  662586 config.go:182] Loaded profile config "old-k8s-version-014592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 11:52:05.457082  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.459984  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.460372  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.460415  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.460631  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.460851  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.461021  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.461217  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.461481  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.461702  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.461722  662586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:05.683276  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:05.683311  662586 machine.go:96] duration metric: took 831.968459ms to provisionDockerMachine
	I1209 11:52:05.683335  662586 start.go:293] postStartSetup for "old-k8s-version-014592" (driver="kvm2")
	I1209 11:52:05.683349  662586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:05.683391  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.683809  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:05.683850  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.687116  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.687540  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.687579  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.687787  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.688013  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.688204  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.688439  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.768777  662586 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:05.772572  662586 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:05.772603  662586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:05.772690  662586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:05.772813  662586 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:05.772942  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:05.784153  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:05.808677  662586 start.go:296] duration metric: took 125.320445ms for postStartSetup
	I1209 11:52:05.808736  662586 fix.go:56] duration metric: took 21.705557963s for fixHost
	I1209 11:52:05.808766  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.811685  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.812053  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.812090  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.812426  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.812639  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.812853  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.812996  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.813345  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.813562  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.813572  662586 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:05.914863  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745125.875320243
	
	I1209 11:52:05.914892  662586 fix.go:216] guest clock: 1733745125.875320243
	I1209 11:52:05.914906  662586 fix.go:229] Guest: 2024-12-09 11:52:05.875320243 +0000 UTC Remote: 2024-12-09 11:52:05.808742373 +0000 UTC m=+218.159686894 (delta=66.57787ms)
	I1209 11:52:05.914941  662586 fix.go:200] guest clock delta is within tolerance: 66.57787ms
	I1209 11:52:05.914952  662586 start.go:83] releasing machines lock for "old-k8s-version-014592", held for 21.811813657s
	I1209 11:52:05.914983  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.915289  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:05.918015  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.918513  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.918546  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.918662  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919315  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919508  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919628  662586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:05.919684  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.919739  662586 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:05.919767  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.922529  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.922816  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923096  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.923121  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923223  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.923258  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923291  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.923459  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.923602  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.923616  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.923848  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.923900  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.924030  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.924104  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:06.037215  662586 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:06.043193  662586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:06.193717  662586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:06.199693  662586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:06.199786  662586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:06.216007  662586 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:06.216040  662586 start.go:495] detecting cgroup driver to use...
	I1209 11:52:06.216131  662586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:06.233631  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:06.249730  662586 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:06.249817  662586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:06.265290  662586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:06.281676  662586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:06.432116  662586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:06.605899  662586 docker.go:233] disabling docker service ...
	I1209 11:52:06.606004  662586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:06.622861  662586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:06.637605  662586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:06.772842  662586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:06.905950  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:06.923048  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:06.943483  662586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 11:52:06.943542  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.957647  662586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:06.957725  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.970221  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.981243  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.992084  662586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:07.004284  662586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:07.014329  662586 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:07.014411  662586 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:07.028104  662586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:52:07.038782  662586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:07.155779  662586 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:07.271726  662586 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:07.271815  662586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:07.276994  662586 start.go:563] Will wait 60s for crictl version
	I1209 11:52:07.277061  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:07.281212  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:07.328839  662586 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:07.328959  662586 ssh_runner.go:195] Run: crio --version
	I1209 11:52:07.360632  662586 ssh_runner.go:195] Run: crio --version
	I1209 11:52:07.393046  662586 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1209 11:52:07.394357  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:07.398002  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:07.398539  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:07.398564  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:07.398893  662586 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:07.404512  662586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:07.417822  662586 kubeadm.go:883] updating cluster {Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:07.418006  662586 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 11:52:07.418108  662586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:07.473163  662586 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:52:07.473249  662586 ssh_runner.go:195] Run: which lz4
	I1209 11:52:07.478501  662586 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:07.483744  662586 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:07.483786  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1209 11:52:06.949438  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.959097  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:06.959150  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:07.449249  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:07.466817  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:07.466860  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:07.948998  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:07.958340  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I1209 11:52:07.966049  662109 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:07.966095  662109 api_server.go:131] duration metric: took 4.017521352s to wait for apiserver health ...
	I1209 11:52:07.966111  662109 cni.go:84] Creating CNI manager for ""
	I1209 11:52:07.966121  662109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:07.967962  662109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:52:05.941206  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Start
	I1209 11:52:05.941411  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring networks are active...
	I1209 11:52:05.942245  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring network default is active
	I1209 11:52:05.942724  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring network mk-default-k8s-diff-port-482476 is active
	I1209 11:52:05.943274  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Getting domain xml...
	I1209 11:52:05.944080  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Creating domain...
	I1209 11:52:07.394633  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting to get IP...
	I1209 11:52:07.396032  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.397444  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.397560  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.397434  663663 retry.go:31] will retry after 205.256699ms: waiting for machine to come up
	I1209 11:52:07.604209  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.604884  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.604920  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.604828  663663 retry.go:31] will retry after 291.255961ms: waiting for machine to come up
	I1209 11:52:07.897467  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.898992  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.899020  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.898866  663663 retry.go:31] will retry after 437.180412ms: waiting for machine to come up
	I1209 11:52:08.337664  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.338195  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.338235  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:08.338151  663663 retry.go:31] will retry after 603.826089ms: waiting for machine to come up
	I1209 11:52:08.944048  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.944672  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.944702  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:08.944612  663663 retry.go:31] will retry after 557.882868ms: waiting for machine to come up
	I1209 11:52:07.969367  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:07.986045  662109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:52:08.075377  662109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:08.091609  662109 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:08.091648  662109 system_pods.go:61] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:08.091656  662109 system_pods.go:61] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:08.091664  662109 system_pods.go:61] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:08.091670  662109 system_pods.go:61] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:08.091675  662109 system_pods.go:61] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:52:08.091681  662109 system_pods.go:61] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:08.091686  662109 system_pods.go:61] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:08.091691  662109 system_pods.go:61] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:08.091699  662109 system_pods.go:74] duration metric: took 16.289433ms to wait for pod list to return data ...
	I1209 11:52:08.091707  662109 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:08.096961  662109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:08.097010  662109 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:08.097047  662109 node_conditions.go:105] duration metric: took 5.334194ms to run NodePressure ...
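The capacity figures the NodePressure check reads above (17734596Ki of ephemeral storage, 2 CPUs) come straight from the node object. A minimal way to look at the same fields by hand, assuming the kubeconfig context written by this run (no-preload-820741) is in use:

    kubectl --context no-preload-820741 get node no-preload-820741 -o jsonpath='{.status.capacity}'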
	I1209 11:52:08.097073  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:08.573868  662109 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:08.583670  662109 kubeadm.go:739] kubelet initialised
	I1209 11:52:08.583700  662109 kubeadm.go:740] duration metric: took 9.800796ms waiting for restarted kubelet to initialise ...
	I1209 11:52:08.583713  662109 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:08.592490  662109 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.600581  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.600611  662109 pod_ready.go:82] duration metric: took 8.087599ms for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.600623  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.600633  662109 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.609663  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "etcd-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.609698  662109 pod_ready.go:82] duration metric: took 9.054194ms for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.609712  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "etcd-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.609722  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.615482  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-apiserver-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.615514  662109 pod_ready.go:82] duration metric: took 5.78152ms for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.615526  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-apiserver-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.615536  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.623662  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.623698  662109 pod_ready.go:82] duration metric: took 8.151877ms for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.623713  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.623722  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.978286  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-proxy-hpvvp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.978323  662109 pod_ready.go:82] duration metric: took 354.589596ms for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.978344  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-proxy-hpvvp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.978356  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:09.378434  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-scheduler-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.378471  662109 pod_ready.go:82] duration metric: took 400.107028ms for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:09.378484  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-scheduler-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.378494  662109 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:09.778087  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.778117  662109 pod_ready.go:82] duration metric: took 399.613592ms for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:09.778129  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.778138  662109 pod_ready.go:39] duration metric: took 1.194413796s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
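The repeated "(skipping!)" entries above are expected here: each per-pod wait is cut short because the node itself has not reported Ready yet, so the harness falls through to the node-level wait that follows. A hand-run equivalent of these readiness checks, purely illustrative and assuming the same context, would be:

    kubectl --context no-preload-820741 -n kube-system get pods -o wide
    kubectl --context no-preload-820741 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s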
	I1209 11:52:09.778162  662109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:52:09.793629  662109 ops.go:34] apiserver oom_adj: -16
	I1209 11:52:09.793663  662109 kubeadm.go:597] duration metric: took 8.374104555s to restartPrimaryControlPlane
	I1209 11:52:09.793681  662109 kubeadm.go:394] duration metric: took 8.419719684s to StartCluster
	I1209 11:52:09.793708  662109 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:09.793848  662109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:52:09.796407  662109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:09.796774  662109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:52:09.796837  662109 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:52:09.796954  662109 addons.go:69] Setting storage-provisioner=true in profile "no-preload-820741"
	I1209 11:52:09.796975  662109 addons.go:234] Setting addon storage-provisioner=true in "no-preload-820741"
	W1209 11:52:09.796984  662109 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:52:09.797023  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.797048  662109 config.go:182] Loaded profile config "no-preload-820741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:09.797086  662109 addons.go:69] Setting default-storageclass=true in profile "no-preload-820741"
	I1209 11:52:09.797110  662109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-820741"
	I1209 11:52:09.797119  662109 addons.go:69] Setting metrics-server=true in profile "no-preload-820741"
	I1209 11:52:09.797150  662109 addons.go:234] Setting addon metrics-server=true in "no-preload-820741"
	W1209 11:52:09.797160  662109 addons.go:243] addon metrics-server should already be in state true
	I1209 11:52:09.797204  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.797545  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797571  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797579  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797596  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.797611  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.797620  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.799690  662109 out.go:177] * Verifying Kubernetes components...
	I1209 11:52:09.801035  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:09.814968  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33991
	I1209 11:52:09.815010  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34319
	I1209 11:52:09.815576  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.815715  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.816340  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.816361  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.816666  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.816683  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.816745  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.817402  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.817449  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.818118  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.818680  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.818718  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.842345  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37501
	I1209 11:52:09.842582  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44183
	I1209 11:52:09.842703  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38793
	I1209 11:52:09.843479  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843608  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843667  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843973  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.843999  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.844168  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.844180  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.844575  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.844773  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.845107  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.845122  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.845633  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.845887  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.847386  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.848553  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.849410  662109 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:52:09.849690  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.850230  662109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:09.850303  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:52:09.850323  662109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:52:09.850346  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.851051  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.851404  662109 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:52:09.851426  662109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:52:09.851447  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.855303  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.855935  662109 addons.go:234] Setting addon default-storageclass=true in "no-preload-820741"
	W1209 11:52:09.855958  662109 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:52:09.855991  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.856373  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.856429  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.857583  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.857614  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.857874  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.858206  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.858588  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.858766  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:09.859464  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.859875  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.859897  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.860238  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.860449  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.860597  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.860736  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:09.880235  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I1209 11:52:09.880846  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.881409  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.881429  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.881855  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.882651  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.882711  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.904576  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I1209 11:52:09.905132  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.905765  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.905788  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.906224  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.906469  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.908475  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.908715  662109 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:52:09.908735  662109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:52:09.908756  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.912294  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.912928  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.912963  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.913128  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.913383  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.913563  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.913711  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:10.141200  662109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:10.172182  662109 node_ready.go:35] waiting up to 6m0s for node "no-preload-820741" to be "Ready" ...
	I1209 11:52:10.306617  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:52:10.306646  662109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:52:10.321962  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:52:10.326125  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:52:10.360534  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:52:10.360568  662109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:52:10.470875  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:52:10.470917  662109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:52:10.555610  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:52:11.721480  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.395310752s)
	I1209 11:52:11.721571  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721638  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.721581  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.165925756s)
	I1209 11:52:11.721735  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.399738143s)
	I1209 11:52:11.721753  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721766  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.721765  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721779  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722002  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722014  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722021  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722028  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722201  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722213  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722221  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722226  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722320  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.722329  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722349  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.722360  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722384  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722395  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722424  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722438  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722465  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722475  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722490  662109 addons.go:475] Verifying addon metrics-server=true in "no-preload-820741"
	I1209 11:52:11.722560  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722579  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722564  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.729638  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.729660  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.729934  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.729950  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.731642  662109 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
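Once the addons report as enabled, metrics-server health can be checked by hand; a minimal sketch, assuming the same context, is:

    kubectl --context no-preload-820741 -n kube-system rollout status deployment/metrics-server --timeout=2m
    kubectl --context no-preload-820741 top nodes

Note that this run deliberately overrides the metrics-server registry to fake.domain (see the "Using image fake.domain/registry.k8s.io/echoserver:1.4" line above), so the deployment is not expected to become Ready here; the commands are only the generic way to verify the addon.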
	I1209 11:52:09.097654  662586 crio.go:462] duration metric: took 1.619191765s to copy over tarball
	I1209 11:52:09.097748  662586 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:12.304496  662586 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.20670295s)
	I1209 11:52:12.304543  662586 crio.go:469] duration metric: took 3.206852542s to extract the tarball
	I1209 11:52:12.304553  662586 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:12.347991  662586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:12.385411  662586 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:52:12.385438  662586 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 11:52:12.385533  662586 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:12.385557  662586 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.385570  662586 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.385609  662586 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.385641  662586 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 11:52:12.385650  662586 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.385645  662586 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.385620  662586 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.387326  662586 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.387335  662586 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:12.387371  662586 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 11:52:12.387372  662586 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.387328  662586 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.387338  662586 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.387328  662586 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.387383  662586 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
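The "No such image" daemon lookups above are not failures of the test itself: they only record that the build host's local Docker daemon has no copy of these images, after which the loader falls back to the tarballs under .minikube/cache/images (see the "Loading image from:" lines below). An equivalent manual probe, purely illustrative:

    docker image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.20.0 \
      || echo "not present in the local daemon"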
	I1209 11:52:12.621631  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.623694  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.632536  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 11:52:12.634550  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.638401  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.641071  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.645344  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:09.504566  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:09.505124  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:09.505155  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:09.505076  663663 retry.go:31] will retry after 636.87343ms: waiting for machine to come up
	I1209 11:52:10.144387  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.145090  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.145119  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:10.145037  663663 retry.go:31] will retry after 716.448577ms: waiting for machine to come up
	I1209 11:52:10.863113  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.863816  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.863848  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:10.863762  663663 retry.go:31] will retry after 901.007245ms: waiting for machine to come up
	I1209 11:52:11.766356  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:11.766745  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:11.766773  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:11.766688  663663 retry.go:31] will retry after 1.570604193s: waiting for machine to come up
	I1209 11:52:13.339318  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:13.339796  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:13.339828  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:13.339744  663663 retry.go:31] will retry after 1.928200683s: waiting for machine to come up
	I1209 11:52:11.732956  662109 addons.go:510] duration metric: took 1.936137102s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1209 11:52:12.175844  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:14.504491  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:12.756066  662586 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 11:52:12.756121  662586 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.756134  662586 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 11:52:12.756175  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.756179  662586 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.756230  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.808091  662586 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 11:52:12.808139  662586 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 11:52:12.808186  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809593  662586 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 11:52:12.809622  662586 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 11:52:12.809637  662586 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.809659  662586 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.809682  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809712  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809775  662586 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 11:52:12.809803  662586 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.809829  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.809841  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809724  662586 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 11:52:12.809873  662586 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.809898  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809933  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.812256  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:12.819121  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.825106  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.910431  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.910501  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.910560  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.910503  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.910638  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:12.910713  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.930461  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:13.079147  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:13.079189  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:13.079233  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:13.079276  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:13.079418  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:13.079447  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:13.079517  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:13.224753  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 11:52:13.227126  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 11:52:13.227190  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:13.227253  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 11:52:13.227291  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:13.227332  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 11:52:13.227393  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 11:52:13.277747  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 11:52:13.285286  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 11:52:13.663858  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:13.805603  662586 cache_images.go:92] duration metric: took 1.420145666s to LoadCachedImages
	W1209 11:52:13.805814  662586 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1209 11:52:13.805848  662586 kubeadm.go:934] updating node { 192.168.61.132 8443 v1.20.0 crio true true} ...
	I1209 11:52:13.805980  662586 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-014592 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
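The kubelet command line shown above is not written into the main unit file; it lands in a systemd drop-in (the scp lines below write /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service). A quick way to confirm what the node's kubelet will actually be started with, run inside the guest:

    sudo systemctl cat kubelet
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf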
	I1209 11:52:13.806079  662586 ssh_runner.go:195] Run: crio config
	I1209 11:52:13.870766  662586 cni.go:84] Creating CNI manager for ""
	I1209 11:52:13.870797  662586 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:13.870813  662586 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:13.870841  662586 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.132 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-014592 NodeName:old-k8s-version-014592 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 11:52:13.871050  662586 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-014592"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
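The block above is the generated kubeadm configuration, written as a single multi-document YAML file to /var/tmp/minikube/kubeadm.yaml.new (see the scp line below). Listing its document headers is a quick sanity check that all four components (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are present:

    sudo grep -E '^(apiVersion|kind):' /var/tmp/minikube/kubeadm.yaml.new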
	I1209 11:52:13.871136  662586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 11:52:13.881556  662586 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:13.881628  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:13.891122  662586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1209 11:52:13.908181  662586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:13.925041  662586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1209 11:52:13.941567  662586 ssh_runner.go:195] Run: grep 192.168.61.132	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:13.945502  662586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:13.957476  662586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:14.091699  662586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:14.108772  662586 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592 for IP: 192.168.61.132
	I1209 11:52:14.108810  662586 certs.go:194] generating shared ca certs ...
	I1209 11:52:14.108838  662586 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:14.109024  662586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:14.109087  662586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:14.109105  662586 certs.go:256] generating profile certs ...
	I1209 11:52:14.109248  662586 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.key
	I1209 11:52:14.109323  662586 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key.28078577
	I1209 11:52:14.109383  662586 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key
	I1209 11:52:14.109572  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:14.109609  662586 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:14.109619  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:14.109659  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:14.109697  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:14.109737  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:14.109802  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:14.110497  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:14.145815  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:14.179452  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:14.217469  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:14.250288  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 11:52:14.287110  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:52:14.317190  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:14.356825  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:14.379756  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:14.402045  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:14.425287  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:14.448025  662586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:14.464144  662586 ssh_runner.go:195] Run: openssl version
	I1209 11:52:14.470256  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:14.481298  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.485849  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.485904  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.492321  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:14.504155  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:14.515819  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.520876  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.520955  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.527295  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:14.538319  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:14.549753  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.554273  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.554341  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.559893  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
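The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes of the respective certificates, which is how the system trust store indexes CAs; the hash for any of them can be reproduced directly, for example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941 for this CA
    ls -l /etc/ssl/certs/b5213941.0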
	I1209 11:52:14.570744  662586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:14.575763  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:14.582279  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:14.588549  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:14.594376  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:14.599758  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:14.605497  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
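Each of those openssl runs uses -checkend 86400, which exits non-zero if the certificate would expire within the next 24 hours; the same check, plus the actual expiry date, can be run by hand against any of the profile's certs:

    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for at least another 24h" || echo "expires within 24h"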
	I1209 11:52:14.611083  662586 kubeadm.go:392] StartCluster: {Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:14.611213  662586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:14.611288  662586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:14.649447  662586 cri.go:89] found id: ""
	I1209 11:52:14.649538  662586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:14.660070  662586 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:14.660094  662586 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:14.660145  662586 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:14.670412  662586 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:14.671387  662586 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-014592" does not appear in /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:52:14.672043  662586 kubeconfig.go:62] /home/jenkins/minikube-integration/20068-609844/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-014592" cluster setting kubeconfig missing "old-k8s-version-014592" context setting]
	I1209 11:52:14.673337  662586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:14.708285  662586 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:14.719486  662586 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.132
	I1209 11:52:14.719535  662586 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:14.719563  662586 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:14.719635  662586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:14.755280  662586 cri.go:89] found id: ""
	I1209 11:52:14.755369  662586 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:14.771385  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:14.781364  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:14.781387  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:14.781455  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:14.790942  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:14.791016  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:14.800481  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:14.809875  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:14.809948  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:14.819619  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:14.831670  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:14.831750  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:14.844244  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:14.853328  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:14.853403  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:14.862428  662586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:14.871346  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.007799  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.697594  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.921787  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:16.031826  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:16.132199  662586 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:16.132310  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:16.633329  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:17.133389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:17.632581  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:15.270255  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:15.270804  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:15.270836  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:15.270741  663663 retry.go:31] will retry after 2.90998032s: waiting for machine to come up
	I1209 11:52:18.182069  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:18.182741  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:18.182774  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:18.182689  663663 retry.go:31] will retry after 3.196470388s: waiting for machine to come up
	I1209 11:52:16.676188  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:17.175894  662109 node_ready.go:49] node "no-preload-820741" has status "Ready":"True"
	I1209 11:52:17.175928  662109 node_ready.go:38] duration metric: took 7.003696159s for node "no-preload-820741" to be "Ready" ...
	I1209 11:52:17.175945  662109 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:17.180647  662109 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:19.188583  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:18.133165  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:18.632403  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:19.132416  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:19.633332  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:20.133343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:20.632968  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:21.133411  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:21.632656  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:22.132876  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:22.632816  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:21.381260  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:21.381912  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:21.381943  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:21.381834  663663 retry.go:31] will retry after 3.621023528s: waiting for machine to come up
	I1209 11:52:26.142813  661546 start.go:364] duration metric: took 56.424295065s to acquireMachinesLock for "embed-certs-005123"
	I1209 11:52:26.142877  661546 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:52:26.142886  661546 fix.go:54] fixHost starting: 
	I1209 11:52:26.143376  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:26.143416  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:26.164438  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45591
	I1209 11:52:26.165041  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:26.165779  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:52:26.165828  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:26.166318  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:26.166544  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:26.166745  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:52:26.168534  661546 fix.go:112] recreateIfNeeded on embed-certs-005123: state=Stopped err=<nil>
	I1209 11:52:26.168564  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	W1209 11:52:26.168753  661546 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:52:26.170973  661546 out.go:177] * Restarting existing kvm2 VM for "embed-certs-005123" ...
	I1209 11:52:26.172269  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Start
	I1209 11:52:26.172500  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring networks are active...
	I1209 11:52:26.173391  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring network default is active
	I1209 11:52:26.173747  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring network mk-embed-certs-005123 is active
	I1209 11:52:26.174208  661546 main.go:141] libmachine: (embed-certs-005123) Getting domain xml...
	I1209 11:52:26.174990  661546 main.go:141] libmachine: (embed-certs-005123) Creating domain...
	I1209 11:52:21.687274  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:23.688011  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:24.187886  662109 pod_ready.go:93] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.187917  662109 pod_ready.go:82] duration metric: took 7.007243363s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.187928  662109 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.193936  662109 pod_ready.go:93] pod "etcd-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.193958  662109 pod_ready.go:82] duration metric: took 6.02353ms for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.193966  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.203685  662109 pod_ready.go:93] pod "kube-apiserver-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.203712  662109 pod_ready.go:82] duration metric: took 9.739287ms for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.203722  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.210004  662109 pod_ready.go:93] pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.210034  662109 pod_ready.go:82] duration metric: took 6.304008ms for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.210048  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.216225  662109 pod_ready.go:93] pod "kube-proxy-hpvvp" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.216249  662109 pod_ready.go:82] duration metric: took 6.193945ms for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.216258  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.584682  662109 pod_ready.go:93] pod "kube-scheduler-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.584711  662109 pod_ready.go:82] duration metric: took 368.445803ms for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.584724  662109 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:25.004323  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.004761  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Found IP for machine: 192.168.50.25
	I1209 11:52:25.004791  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has current primary IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.004798  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Reserving static IP address...
	I1209 11:52:25.005275  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-482476", mac: "52:54:00:f0:c9:8a", ip: "192.168.50.25"} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.005301  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | skip adding static IP to network mk-default-k8s-diff-port-482476 - found existing host DHCP lease matching {name: "default-k8s-diff-port-482476", mac: "52:54:00:f0:c9:8a", ip: "192.168.50.25"}
	I1209 11:52:25.005314  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Reserved static IP address: 192.168.50.25
	I1209 11:52:25.005328  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for SSH to be available...
	I1209 11:52:25.005342  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Getting to WaitForSSH function...
	I1209 11:52:25.007758  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.008146  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.008189  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.008291  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Using SSH client type: external
	I1209 11:52:25.008318  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa (-rw-------)
	I1209 11:52:25.008348  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:25.008361  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | About to run SSH command:
	I1209 11:52:25.008369  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | exit 0
	I1209 11:52:25.130532  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:25.130901  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetConfigRaw
	I1209 11:52:25.131568  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:25.134487  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.134816  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.134854  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.135163  663024 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/config.json ...
	I1209 11:52:25.135451  663024 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:25.135480  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:25.135736  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.138444  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.138853  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.138894  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.138981  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.139188  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.139327  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.139491  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.139655  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.139895  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.139906  663024 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:25.242441  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:25.242472  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.242837  663024 buildroot.go:166] provisioning hostname "default-k8s-diff-port-482476"
	I1209 11:52:25.242878  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.243093  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.245995  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.246447  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.246478  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.246685  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.246900  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.247052  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.247175  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.247330  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.247518  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.247531  663024 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-482476 && echo "default-k8s-diff-port-482476" | sudo tee /etc/hostname
	I1209 11:52:25.361366  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-482476
	
	I1209 11:52:25.361397  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.364194  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.364608  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.364639  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.364813  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.365064  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.365267  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.365369  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.365613  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.365790  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.365808  663024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-482476' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-482476/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-482476' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:25.475311  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:25.475346  663024 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:25.475386  663024 buildroot.go:174] setting up certificates
	I1209 11:52:25.475403  663024 provision.go:84] configureAuth start
	I1209 11:52:25.475412  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.475711  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:25.478574  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.478903  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.478935  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.479055  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.481280  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.481655  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.481688  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.481788  663024 provision.go:143] copyHostCerts
	I1209 11:52:25.481845  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:25.481876  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:25.481957  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:25.482056  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:25.482065  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:25.482090  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:25.482243  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:25.482254  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:25.482279  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:25.482336  663024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-482476 san=[127.0.0.1 192.168.50.25 default-k8s-diff-port-482476 localhost minikube]
	I1209 11:52:25.534856  663024 provision.go:177] copyRemoteCerts
	I1209 11:52:25.534921  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:25.534951  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.537732  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.538138  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.538190  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.538390  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.538611  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.538783  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.538943  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:25.619772  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:25.643527  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1209 11:52:25.668517  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:25.693573  663024 provision.go:87] duration metric: took 218.153182ms to configureAuth
	I1209 11:52:25.693615  663024 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:25.693807  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:25.693906  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.696683  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.697058  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.697092  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.697344  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.697548  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.697741  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.697868  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.698033  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.698229  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.698254  663024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:25.915568  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:25.915595  663024 machine.go:96] duration metric: took 780.126343ms to provisionDockerMachine
	I1209 11:52:25.915610  663024 start.go:293] postStartSetup for "default-k8s-diff-port-482476" (driver="kvm2")
	I1209 11:52:25.915620  663024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:25.915644  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:25.916005  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:25.916047  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.919268  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.919602  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.919628  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.919775  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.919967  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.920133  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.920285  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.000530  663024 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:26.004544  663024 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:26.004574  663024 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:26.004651  663024 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:26.004759  663024 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:26.004885  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:26.013444  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:26.036052  663024 start.go:296] duration metric: took 120.422739ms for postStartSetup
	I1209 11:52:26.036110  663024 fix.go:56] duration metric: took 20.120932786s for fixHost
	I1209 11:52:26.036135  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.039079  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.039445  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.039478  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.039797  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.040065  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.040228  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.040427  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.040620  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:26.040906  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:26.040924  663024 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:26.142590  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745146.090497627
	
	I1209 11:52:26.142623  663024 fix.go:216] guest clock: 1733745146.090497627
	I1209 11:52:26.142634  663024 fix.go:229] Guest: 2024-12-09 11:52:26.090497627 +0000 UTC Remote: 2024-12-09 11:52:26.036115182 +0000 UTC m=+146.587055001 (delta=54.382445ms)
	I1209 11:52:26.142669  663024 fix.go:200] guest clock delta is within tolerance: 54.382445ms
	I1209 11:52:26.142681  663024 start.go:83] releasing machines lock for "default-k8s-diff-port-482476", held for 20.227543026s
	I1209 11:52:26.142723  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.143032  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:26.146118  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.146602  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.146634  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.146841  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147440  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147709  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147833  663024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:26.147872  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.147980  663024 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:26.148009  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.151002  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151346  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151379  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.151410  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151534  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.151729  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.151848  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.151876  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151904  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.152003  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.152082  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.152159  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.152322  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.152565  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.231575  663024 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:26.267939  663024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:26.418953  663024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:26.426243  663024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:26.426337  663024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:26.448407  663024 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:26.448442  663024 start.go:495] detecting cgroup driver to use...
	I1209 11:52:26.448540  663024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:26.469675  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:26.488825  663024 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:26.488902  663024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:26.507716  663024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:26.525232  663024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:26.664062  663024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:26.854813  663024 docker.go:233] disabling docker service ...
	I1209 11:52:26.854883  663024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:26.870021  663024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:26.883610  663024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:27.001237  663024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:27.126865  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:27.144121  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:27.168073  663024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:52:27.168242  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.180516  663024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:27.180587  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.191681  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.204047  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.214157  663024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:27.225934  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.236691  663024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.258774  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.271986  663024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:27.283488  663024 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:27.283539  663024 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:27.299065  663024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:52:27.309203  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:27.431740  663024 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:27.529577  663024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:27.529668  663024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:27.534733  663024 start.go:563] Will wait 60s for crictl version
	I1209 11:52:27.534800  663024 ssh_runner.go:195] Run: which crictl
	I1209 11:52:27.538544  663024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:27.577577  663024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:27.577684  663024 ssh_runner.go:195] Run: crio --version
	I1209 11:52:27.607938  663024 ssh_runner.go:195] Run: crio --version
	I1209 11:52:27.645210  663024 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:52:23.133393  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:23.632776  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:24.133286  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:24.632415  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:25.132348  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:25.632478  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:26.132982  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:26.632517  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.132692  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.633291  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.646510  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:27.650014  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:27.650439  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:27.650469  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:27.650705  663024 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:27.654738  663024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:27.668671  663024 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:27.668808  663024 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:52:27.668873  663024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:27.709582  663024 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:52:27.709679  663024 ssh_runner.go:195] Run: which lz4
	I1209 11:52:27.713702  663024 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:27.717851  663024 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:27.717887  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 11:52:29.037160  663024 crio.go:462] duration metric: took 1.32348676s to copy over tarball
	I1209 11:52:29.037262  663024 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
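
The preload step above stats /preloaded.tar.lz4 on the node, copies the cached preloaded-images archive over SSH, and unpacks it into /var with tar and lz4. A minimal sketch of just the extract step, assuming the tarball has already been copied to /preloaded.tar.lz4 and running the command locally on the node rather than through minikube's ssh_runner (paths and flags taken from the log, error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into /var, mirroring the
// command shown in the log:
//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
func extractPreload() error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload(); err != nil {
		fmt.Println(err)
	}
}
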
	I1209 11:52:27.500098  661546 main.go:141] libmachine: (embed-certs-005123) Waiting to get IP...
	I1209 11:52:27.501088  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.501538  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.501605  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.501510  663907 retry.go:31] will retry after 191.187925ms: waiting for machine to come up
	I1209 11:52:27.694017  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.694574  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.694605  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.694512  663907 retry.go:31] will retry after 256.268ms: waiting for machine to come up
	I1209 11:52:27.952185  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.952863  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.952908  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.952759  663907 retry.go:31] will retry after 460.272204ms: waiting for machine to come up
	I1209 11:52:28.414403  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:28.414925  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:28.414967  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:28.414873  663907 retry.go:31] will retry after 450.761189ms: waiting for machine to come up
	I1209 11:52:28.867687  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:28.868350  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:28.868389  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:28.868313  663907 retry.go:31] will retry after 615.800863ms: waiting for machine to come up
	I1209 11:52:29.486566  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:29.487179  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:29.487218  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:29.487108  663907 retry.go:31] will retry after 628.641045ms: waiting for machine to come up
	I1209 11:52:30.117051  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:30.117424  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:30.117459  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:30.117356  663907 retry.go:31] will retry after 902.465226ms: waiting for machine to come up
	I1209 11:52:31.021756  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:31.022268  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:31.022298  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:31.022229  663907 retry.go:31] will retry after 918.939368ms: waiting for machine to come up
	I1209 11:52:26.594953  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:29.093499  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:28.132379  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:28.633377  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:29.132983  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:29.633370  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:30.132748  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:30.633383  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.133450  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.633210  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:32.132406  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:32.632598  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.234956  663024 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.197609203s)
	I1209 11:52:31.235007  663024 crio.go:469] duration metric: took 2.197798334s to extract the tarball
	I1209 11:52:31.235018  663024 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:31.275616  663024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:31.320918  663024 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:52:31.320945  663024 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:52:31.320961  663024 kubeadm.go:934] updating node { 192.168.50.25 8444 v1.31.2 crio true true} ...
	I1209 11:52:31.321122  663024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-482476 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:31.321246  663024 ssh_runner.go:195] Run: crio config
	I1209 11:52:31.367805  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:52:31.367827  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:31.367839  663024 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:31.367863  663024 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.25 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-482476 NodeName:default-k8s-diff-port-482476 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:31.368005  663024 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.25
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-482476"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.25"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.25"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:31.368074  663024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:31.377831  663024 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:31.377902  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:31.386872  663024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1209 11:52:31.403764  663024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:31.419295  663024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
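
The kubeadm.yaml generated above and copied to /var/tmp/minikube/kubeadm.yaml.new is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small stdlib-only sketch that splits such a stream on document separators and reports each document's kind, useful for sanity-checking the written file; this is an illustrative helper, not part of minikube:

package main

import (
	"fmt"
	"os"
	"strings"
)

// listKinds splits a multi-document YAML stream on "---" separators and
// returns the value of the first "kind:" line found in each document.
func listKinds(stream string) []string {
	var kinds []string
	for _, doc := range strings.Split(stream, "\n---") {
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
				break
			}
		}
	}
	return kinds
}

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	// For the config shown in the log this prints:
	// [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	fmt.Println(listKinds(string(data)))
}
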
	I1209 11:52:31.435856  663024 ssh_runner.go:195] Run: grep 192.168.50.25	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:31.439480  663024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:31.455136  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:31.573295  663024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:31.589679  663024 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476 for IP: 192.168.50.25
	I1209 11:52:31.589703  663024 certs.go:194] generating shared ca certs ...
	I1209 11:52:31.589741  663024 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:31.589930  663024 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:31.589982  663024 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:31.589995  663024 certs.go:256] generating profile certs ...
	I1209 11:52:31.590137  663024 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.key
	I1209 11:52:31.590256  663024 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.key.e2346b12
	I1209 11:52:31.590322  663024 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.key
	I1209 11:52:31.590479  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:31.590522  663024 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:31.590535  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:31.590571  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:31.590612  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:31.590649  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:31.590710  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:31.591643  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:31.634363  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:31.660090  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:31.692933  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:31.726010  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 11:52:31.757565  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 11:52:31.781368  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:31.805233  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:31.828391  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:31.850407  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:31.873159  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:31.895503  663024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:31.911754  663024 ssh_runner.go:195] Run: openssl version
	I1209 11:52:31.917771  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:31.929857  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.934518  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.934596  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.940382  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:31.951417  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:31.961966  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.966234  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.966286  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.972070  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:31.982547  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:31.993215  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:31.997579  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:31.997641  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:32.003050  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:32.013463  663024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:32.017936  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:32.024029  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:32.029686  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:32.035260  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:32.040696  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:32.046116  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
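
The openssl x509 -checkend 86400 calls above ask whether each control-plane certificate will still be valid 24 hours from now. An equivalent check written with Go's crypto/x509, as a standalone sketch rather than minikube code; the certificate path comes from the command line:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// main mimics `openssl x509 -noout -checkend 86400 -in <cert>`: exit 0 if the
// certificate is still valid 24h from now, exit 1 if it will have expired.
func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(2)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}
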
	I1209 11:52:32.051521  663024 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:32.051605  663024 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:32.051676  663024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:32.092529  663024 cri.go:89] found id: ""
	I1209 11:52:32.092623  663024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:32.103153  663024 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:32.103183  663024 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:32.103247  663024 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:32.113029  663024 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:32.114506  663024 kubeconfig.go:125] found "default-k8s-diff-port-482476" server: "https://192.168.50.25:8444"
	I1209 11:52:32.116929  663024 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:32.127055  663024 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.25
	I1209 11:52:32.127108  663024 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:32.127124  663024 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:32.127189  663024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:32.169401  663024 cri.go:89] found id: ""
	I1209 11:52:32.169507  663024 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:32.187274  663024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:32.196843  663024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:32.196867  663024 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:32.196925  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 11:52:32.205670  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:32.205754  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:32.214977  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 11:52:32.223707  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:32.223782  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:32.232514  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 11:52:32.240999  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:32.241076  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:32.250049  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 11:52:32.258782  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:32.258846  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:32.268447  663024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:32.277875  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:32.394016  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.494978  663024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.100920844s)
	I1209 11:52:33.495030  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.719319  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.787272  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.882783  663024 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:33.882876  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.383090  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.942735  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:31.943207  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:31.943244  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:31.943141  663907 retry.go:31] will retry after 1.153139191s: waiting for machine to come up
	I1209 11:52:33.097672  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:33.098233  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:33.098299  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:33.098199  663907 retry.go:31] will retry after 2.002880852s: waiting for machine to come up
	I1209 11:52:35.103239  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:35.103693  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:35.103724  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:35.103639  663907 retry.go:31] will retry after 2.219510124s: waiting for machine to come up
	I1209 11:52:31.593184  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:34.090877  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:36.094569  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:33.132924  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:33.632884  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.132528  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.632989  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.133398  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.632376  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:36.132936  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:36.633152  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:37.133343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:37.633367  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.883172  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.384008  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.883940  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.901453  663024 api_server.go:72] duration metric: took 2.018670363s to wait for apiserver process to appear ...
	I1209 11:52:35.901489  663024 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:35.901524  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.225976  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:38.226017  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:38.226037  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.269459  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:38.269549  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:38.401652  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.407995  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:38.408028  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:38.902416  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.914550  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:38.914579  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:39.401719  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:39.409382  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:39.409427  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:39.902488  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:39.907511  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 200:
	ok
	I1209 11:52:39.914532  663024 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:39.914562  663024 api_server.go:131] duration metric: took 4.013066199s to wait for apiserver health ...
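
The health wait above polls https://192.168.50.25:8444/healthz roughly every 500ms, tolerating the anonymous-user 403s and the 500s from unfinished post-start hooks until the endpoint finally returns 200 "ok". A minimal sketch of such a poll loop; the endpoint and cadence are taken from the log, TLS verification is skipped because the probe is unauthenticated, and the helper itself is illustrative rather than minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. 403 and 500 responses are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.25:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
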
	I1209 11:52:39.914586  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:52:39.914594  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:39.915954  663024 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:52:37.324833  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:37.325397  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:37.325430  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:37.325338  663907 retry.go:31] will retry after 3.636796307s: waiting for machine to come up
	I1209 11:52:40.966039  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:40.966438  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:40.966463  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:40.966419  663907 retry.go:31] will retry after 3.704289622s: waiting for machine to come up
	I1209 11:52:38.592804  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:40.593407  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:38.133368  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:38.632475  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.132993  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.633225  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:40.132552  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:40.633292  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:41.132443  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:41.632994  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:42.132631  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:42.633378  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.917397  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:39.928995  663024 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:52:39.953045  663024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:39.962582  663024 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:39.962628  663024 system_pods.go:61] "coredns-7c65d6cfc9-zzrgn" [dca7a835-3b66-4515-b571-6420afc42c44] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:39.962639  663024 system_pods.go:61] "etcd-default-k8s-diff-port-482476" [2323dbbc-9e7f-4047-b0be-b68b851f4986] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:39.962649  663024 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-482476" [0b7a4936-5282-46a4-a08a-e225b303f6f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:39.962658  663024 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-482476" [c6ff79a0-2177-4c79-8021-c523f8d53e9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:39.962666  663024 system_pods.go:61] "kube-proxy-6th5d" [0cff6df1-1adb-4b7e-8d59-a837db026339] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 11:52:39.962682  663024 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-482476" [524125eb-afd4-4e20-b0f0-e58019e84962] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:39.962694  663024 system_pods.go:61] "metrics-server-6867b74b74-bpccn" [7426c800-9ff7-4778-82a0-6c71fd05a222] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:39.962702  663024 system_pods.go:61] "storage-provisioner" [4478313a-58e8-4d24-ab0b-c087e664200d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:39.962711  663024 system_pods.go:74] duration metric: took 9.637672ms to wait for pod list to return data ...
	I1209 11:52:39.962725  663024 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:39.969576  663024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:39.969611  663024 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:39.969627  663024 node_conditions.go:105] duration metric: took 6.893708ms to run NodePressure ...
	I1209 11:52:39.969660  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:40.340239  663024 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:40.345384  663024 kubeadm.go:739] kubelet initialised
	I1209 11:52:40.345412  663024 kubeadm.go:740] duration metric: took 5.145751ms waiting for restarted kubelet to initialise ...
	I1209 11:52:40.345425  663024 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:40.350721  663024 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:42.357138  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:44.361981  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
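
The pod_ready.go lines above repeatedly read the coredns pod and report whether its Ready condition is True. A comparable wait written directly against client-go, as a sketch only: the kubeconfig path is a placeholder, the namespace and pod name are taken from the log, and it assumes the standard client-go and apimachinery modules rather than minikube's own pod_ready helpers:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s for up to 4 minutes, mirroring the "extra waiting up to 4m0s" in the log.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-zzrgn", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
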
	I1209 11:52:44.674598  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.675048  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has current primary IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.675068  661546 main.go:141] libmachine: (embed-certs-005123) Found IP for machine: 192.168.72.218
	I1209 11:52:44.675075  661546 main.go:141] libmachine: (embed-certs-005123) Reserving static IP address...
	I1209 11:52:44.675492  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "embed-certs-005123", mac: "52:54:00:ee:a0:a8", ip: "192.168.72.218"} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.675522  661546 main.go:141] libmachine: (embed-certs-005123) DBG | skip adding static IP to network mk-embed-certs-005123 - found existing host DHCP lease matching {name: "embed-certs-005123", mac: "52:54:00:ee:a0:a8", ip: "192.168.72.218"}
	I1209 11:52:44.675537  661546 main.go:141] libmachine: (embed-certs-005123) Reserved static IP address: 192.168.72.218
	I1209 11:52:44.675555  661546 main.go:141] libmachine: (embed-certs-005123) Waiting for SSH to be available...
	I1209 11:52:44.675566  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Getting to WaitForSSH function...
	I1209 11:52:44.677490  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.677814  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.677860  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.677952  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Using SSH client type: external
	I1209 11:52:44.678012  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa (-rw-------)
	I1209 11:52:44.678042  661546 main.go:141] libmachine: (embed-certs-005123) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:44.678056  661546 main.go:141] libmachine: (embed-certs-005123) DBG | About to run SSH command:
	I1209 11:52:44.678068  661546 main.go:141] libmachine: (embed-certs-005123) DBG | exit 0
	I1209 11:52:44.798377  661546 main.go:141] libmachine: (embed-certs-005123) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:44.798782  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetConfigRaw
	I1209 11:52:44.799532  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:44.801853  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.802223  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.802255  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.802539  661546 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/config.json ...
	I1209 11:52:44.802777  661546 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:44.802799  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:44.802994  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:44.805481  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.805803  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.805838  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.805999  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:44.806219  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.806386  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.806555  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:44.806716  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:44.806886  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:44.806897  661546 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:44.914443  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:44.914480  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:44.914783  661546 buildroot.go:166] provisioning hostname "embed-certs-005123"
	I1209 11:52:44.914810  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:44.914973  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:44.918053  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.918471  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.918508  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.918701  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:44.918935  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.919087  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.919267  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:44.919452  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:44.919624  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:44.919645  661546 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-005123 && echo "embed-certs-005123" | sudo tee /etc/hostname
	I1209 11:52:45.032725  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-005123
	
	I1209 11:52:45.032760  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.035820  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.036222  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.036263  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.036466  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.036666  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.036864  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.037003  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.037189  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.037396  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.037413  661546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-005123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-005123/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-005123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:45.147189  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:45.147225  661546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:45.147284  661546 buildroot.go:174] setting up certificates
	I1209 11:52:45.147299  661546 provision.go:84] configureAuth start
	I1209 11:52:45.147313  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:45.147667  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:45.150526  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.150965  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.151009  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.151118  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.153778  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.154178  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.154213  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.154382  661546 provision.go:143] copyHostCerts
	I1209 11:52:45.154455  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:45.154478  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:45.154549  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:45.154673  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:45.154685  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:45.154717  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:45.154816  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:45.154829  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:45.154857  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:45.154935  661546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.embed-certs-005123 san=[127.0.0.1 192.168.72.218 embed-certs-005123 localhost minikube]
	I1209 11:52:45.382712  661546 provision.go:177] copyRemoteCerts
	I1209 11:52:45.382772  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:45.382801  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.385625  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.386020  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.386050  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.386241  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.386448  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.386626  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.386765  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:45.464427  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:45.488111  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1209 11:52:45.511231  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:45.534104  661546 provision.go:87] duration metric: took 386.787703ms to configureAuth
	I1209 11:52:45.534141  661546 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:45.534411  661546 config.go:182] Loaded profile config "embed-certs-005123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:45.534526  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.537936  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.538370  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.538402  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.538584  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.538826  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.539019  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.539150  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.539378  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.539551  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.539568  661546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:45.771215  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:45.771259  661546 machine.go:96] duration metric: took 968.466766ms to provisionDockerMachine
	I1209 11:52:45.771276  661546 start.go:293] postStartSetup for "embed-certs-005123" (driver="kvm2")
	I1209 11:52:45.771287  661546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:45.771316  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:45.771673  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:45.771709  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.774881  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.775294  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.775340  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.775510  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.775714  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.775899  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.776065  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:45.856991  661546 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:45.862195  661546 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:45.862224  661546 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:45.862295  661546 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:45.862368  661546 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:45.862497  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:45.874850  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:45.899279  661546 start.go:296] duration metric: took 127.984399ms for postStartSetup
	I1209 11:52:45.899332  661546 fix.go:56] duration metric: took 19.756446591s for fixHost
	I1209 11:52:45.899362  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.902428  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.902828  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.902861  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.903117  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.903344  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.903554  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.903704  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.903955  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.904191  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.904209  661546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:46.007164  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745165.964649155
	
	I1209 11:52:46.007194  661546 fix.go:216] guest clock: 1733745165.964649155
	I1209 11:52:46.007217  661546 fix.go:229] Guest: 2024-12-09 11:52:45.964649155 +0000 UTC Remote: 2024-12-09 11:52:45.899337716 +0000 UTC m=+369.711404421 (delta=65.311439ms)
	I1209 11:52:46.007267  661546 fix.go:200] guest clock delta is within tolerance: 65.311439ms
	I1209 11:52:46.007280  661546 start.go:83] releasing machines lock for "embed-certs-005123", held for 19.864428938s
	I1209 11:52:46.007313  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.007616  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:46.011273  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.011799  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.011830  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.012074  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.012681  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.012907  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.013027  661546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:46.013099  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:46.013170  661546 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:46.013196  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:46.016473  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016764  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016840  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.016875  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016964  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:46.017186  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:46.017287  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.017401  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:46.017442  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.017480  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:46.017553  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:46.017785  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:46.017911  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:46.018075  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:46.129248  661546 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:46.136309  661546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:43.091899  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:45.592415  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:46.287879  661546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:46.293689  661546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:46.293770  661546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:46.311972  661546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:46.312009  661546 start.go:495] detecting cgroup driver to use...
	I1209 11:52:46.312085  661546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:46.329406  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:46.344607  661546 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:46.344664  661546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:46.360448  661546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:46.374509  661546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:46.503687  661546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:46.649152  661546 docker.go:233] disabling docker service ...
	I1209 11:52:46.649234  661546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:46.663277  661546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:46.677442  661546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:46.832667  661546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:46.949826  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:46.963119  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:46.981743  661546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:52:46.981834  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:46.991634  661546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:46.991706  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.004032  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.015001  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.025000  661546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:47.035513  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.045431  661546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.061931  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.071531  661546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:47.080492  661546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:47.080559  661546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:47.094021  661546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:52:47.104015  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:47.226538  661546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:47.318832  661546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:47.318911  661546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:47.323209  661546 start.go:563] Will wait 60s for crictl version
	I1209 11:52:47.323276  661546 ssh_runner.go:195] Run: which crictl
	I1209 11:52:47.326773  661546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:47.365536  661546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:47.365629  661546 ssh_runner.go:195] Run: crio --version
	I1209 11:52:47.392781  661546 ssh_runner.go:195] Run: crio --version
	I1209 11:52:47.422945  661546 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:52:43.133189  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:43.632726  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:44.132804  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:44.632952  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:45.132474  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:45.633318  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.133116  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.632595  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:47.133211  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:47.633233  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.858128  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:49.358845  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:47.423936  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:47.426959  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:47.427401  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:47.427425  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:47.427636  661546 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:47.432509  661546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:47.448620  661546 kubeadm.go:883] updating cluster {Name:embed-certs-005123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:47.448772  661546 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:52:47.448824  661546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:47.485100  661546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:52:47.485173  661546 ssh_runner.go:195] Run: which lz4
	I1209 11:52:47.489202  661546 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:47.493060  661546 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:47.493093  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 11:52:48.772297  661546 crio.go:462] duration metric: took 1.283133931s to copy over tarball
	I1209 11:52:48.772381  661546 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:50.959318  661546 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.18690714s)
	I1209 11:52:50.959352  661546 crio.go:469] duration metric: took 2.187018432s to extract the tarball
	I1209 11:52:50.959359  661546 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:50.995746  661546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:51.037764  661546 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:52:51.037792  661546 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:52:51.037799  661546 kubeadm.go:934] updating node { 192.168.72.218 8443 v1.31.2 crio true true} ...
	I1209 11:52:51.037909  661546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-005123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:51.037972  661546 ssh_runner.go:195] Run: crio config
	I1209 11:52:51.080191  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:52:51.080220  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:51.080231  661546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:51.080258  661546 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.218 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-005123 NodeName:embed-certs-005123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:51.080442  661546 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-005123"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.218"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.218"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:51.080544  661546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:51.091894  661546 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:51.091975  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:51.101702  661546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1209 11:52:51.117636  661546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:51.133662  661546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1209 11:52:51.151725  661546 ssh_runner.go:195] Run: grep 192.168.72.218	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:51.155759  661546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:51.167480  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:47.592707  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:50.093177  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:48.132348  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:48.632894  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:49.133272  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:49.633015  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.132977  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.632533  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:51.132939  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:51.632463  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:52.133082  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:52.633298  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.357709  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.357740  663024 pod_ready.go:82] duration metric: took 10.006992001s for pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.357752  663024 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.363374  663024 pod_ready.go:93] pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.363403  663024 pod_ready.go:82] duration metric: took 5.642657ms for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.363417  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.368456  663024 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.368478  663024 pod_ready.go:82] duration metric: took 5.053713ms for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.368488  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.374156  663024 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.374205  663024 pod_ready.go:82] duration metric: took 5.708489ms for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.374219  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6th5d" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.378734  663024 pod_ready.go:93] pod "kube-proxy-6th5d" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.378752  663024 pod_ready.go:82] duration metric: took 4.526066ms for pod "kube-proxy-6th5d" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.378760  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:52.384763  663024 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:53.389110  663024 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:53.389146  663024 pod_ready.go:82] duration metric: took 3.010378852s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:53.389162  663024 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:51.305408  661546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:51.330738  661546 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123 for IP: 192.168.72.218
	I1209 11:52:51.330766  661546 certs.go:194] generating shared ca certs ...
	I1209 11:52:51.330791  661546 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:51.331002  661546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:51.331099  661546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:51.331116  661546 certs.go:256] generating profile certs ...
	I1209 11:52:51.331252  661546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/client.key
	I1209 11:52:51.331333  661546 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.key.a40d22b0
	I1209 11:52:51.331400  661546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.key
	I1209 11:52:51.331595  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:51.331631  661546 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:51.331645  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:51.331680  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:51.331717  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:51.331747  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:51.331824  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:51.332728  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:51.366002  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:51.400591  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:51.431219  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:51.459334  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 11:52:51.487240  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 11:52:51.522273  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:51.545757  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:51.572793  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:51.595719  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:51.618456  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:51.643337  661546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:51.659719  661546 ssh_runner.go:195] Run: openssl version
	I1209 11:52:51.665339  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:51.676145  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.680615  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.680670  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.686782  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:51.697398  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:51.707438  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.711764  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.711832  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.717278  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:51.727774  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:51.738575  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.742996  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.743057  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.748505  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
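For reference, the "openssl x509 -hash -noout" / "ln -fs" pairs above install each CA certificate into OpenSSL's hashed-lookup directory as /etc/ssl/certs/&lt;subject-hash&gt;.0. A minimal Go sketch of that pattern, using the minikubeCA path from the log (illustrative only, not minikube's certs.go):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Path taken from the log above; any PEM CA certificate works.
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"

	// "openssl x509 -hash -noout" prints the subject hash OpenSSL uses for
	// hashed-directory certificate lookup (the value the log links as
	// /etc/ssl/certs/<hash>.0).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		fmt.Println("hashing failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))

	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // behave like "ln -fs": replace any existing link
	if err := os.Symlink(certPath, link); err != nil {
		fmt.Println("symlink failed:", err)
		return
	}
	fmt.Println("linked", link, "->", certPath)
}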
	I1209 11:52:51.758738  661546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:51.763005  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:51.768964  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:51.775011  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:51.780810  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:51.786716  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:51.792351  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
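The "-checkend 86400" probes above ask openssl whether each control-plane certificate expires within the next 24 hours; a non-zero exit means it does. A minimal Go sketch of the same check against one of the paths from the log (an illustration, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithin24h mirrors "openssl x509 -noout -in <cert> -checkend 86400":
// openssl exits non-zero when the certificate expires within 86400 seconds.
func expiresWithin24h(certPath string) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return true, nil // non-zero exit status: expiring within 24h
		}
		return false, err // openssl missing, unreadable file, etc.
	}
	return false, nil
}

func main() {
	// Path taken from the checks above.
	expiring, err := expiresWithin24h("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}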
	I1209 11:52:51.798098  661546 kubeadm.go:392] StartCluster: {Name:embed-certs-005123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:51.798239  661546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:51.798296  661546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:51.840669  661546 cri.go:89] found id: ""
	I1209 11:52:51.840755  661546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:51.850404  661546 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:51.850429  661546 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:51.850474  661546 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:51.859350  661546 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:51.860405  661546 kubeconfig.go:125] found "embed-certs-005123" server: "https://192.168.72.218:8443"
	I1209 11:52:51.862591  661546 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:51.872497  661546 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.218
	I1209 11:52:51.872539  661546 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:51.872558  661546 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:51.872638  661546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:51.913221  661546 cri.go:89] found id: ""
	I1209 11:52:51.913316  661546 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:51.929885  661546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:51.940078  661546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:51.940105  661546 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:51.940166  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:51.948911  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:51.948977  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:51.958278  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:51.966808  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:51.966879  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:51.975480  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:51.984071  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:51.984127  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:51.992480  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:52.000798  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:52.000873  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:52.009553  661546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:52.019274  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:52.133477  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.081976  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.293871  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.364259  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.452043  661546 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:53.452147  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:53.952743  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.452498  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.952482  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.452783  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.483411  661546 api_server.go:72] duration metric: took 2.0313706s to wait for apiserver process to appear ...
	I1209 11:52:55.483448  661546 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:55.483473  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:55.483982  661546 api_server.go:269] stopped: https://192.168.72.218:8443/healthz: Get "https://192.168.72.218:8443/healthz": dial tcp 192.168.72.218:8443: connect: connection refused
	I1209 11:52:55.983589  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:52.592309  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:55.257400  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:53.132520  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:53.633385  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.132432  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.632974  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.132958  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.633343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:56.132687  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:56.633236  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:57.133489  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:57.633105  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.396602  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:57.397077  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:58.136225  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:58.136259  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:58.136276  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.174521  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:58.174583  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:58.484089  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.489495  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:58.489536  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:58.984185  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.990889  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:58.990932  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:59.484415  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:59.490878  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 200:
	ok
	I1209 11:52:59.498196  661546 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:59.498231  661546 api_server.go:131] duration metric: took 4.014775842s to wait for apiserver health ...
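The healthz wait above polls https://192.168.72.218:8443/healthz, tolerating the 403 and 500 responses the apiserver returns while its post-start hooks finish, and stops once the endpoint answers 200 "ok". A minimal Go sketch of that polling loop (endpoint taken from the log; the timeout here is an arbitrary stand-in, and this is not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200.
// Anything else (connection refused, 403 before RBAC bootstraps, 500 while
// post-start hooks run) is treated as "keep waiting".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The bootstrap check talks to a self-signed apiserver cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.218:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}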
	I1209 11:52:59.498241  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:52:59.498247  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:59.499779  661546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:52:59.500941  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:59.514201  661546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:52:59.544391  661546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:59.555798  661546 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:59.555837  661546 system_pods.go:61] "coredns-7c65d6cfc9-cdnjm" [7cb724f8-c570-4a19-808d-da994ec43eaa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:59.555849  661546 system_pods.go:61] "etcd-embed-certs-005123" [bf194765-7520-4b5d-a1e5-b49830a0f620] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:59.555858  661546 system_pods.go:61] "kube-apiserver-embed-certs-005123" [470f6c19-0112-4b0d-89d9-b792e912cf6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:59.555863  661546 system_pods.go:61] "kube-controller-manager-embed-certs-005123" [b42748b2-f3a9-4d29-a832-a30d54b329c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:59.555868  661546 system_pods.go:61] "kube-proxy-b7bf2" [f9aab69c-2232-4f56-a502-ffd033f7ac10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 11:52:59.555877  661546 system_pods.go:61] "kube-scheduler-embed-certs-005123" [e61a8e3c-c1d3-4dab-abb2-6f5221bc5d25] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:59.555885  661546 system_pods.go:61] "metrics-server-6867b74b74-x4kvn" [210cb99c-e3e7-4337-bed4-985cb98143dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:59.555893  661546 system_pods.go:61] "storage-provisioner" [f2f7d9e2-1121-4df2-adb7-a0af32f957ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:59.555903  661546 system_pods.go:74] duration metric: took 11.485008ms to wait for pod list to return data ...
	I1209 11:52:59.555913  661546 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:59.560077  661546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:59.560100  661546 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:59.560110  661546 node_conditions.go:105] duration metric: took 4.192476ms to run NodePressure ...
	I1209 11:52:59.560132  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:59.890141  661546 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:59.895382  661546 kubeadm.go:739] kubelet initialised
	I1209 11:52:59.895414  661546 kubeadm.go:740] duration metric: took 5.227549ms waiting for restarted kubelet to initialise ...
	I1209 11:52:59.895425  661546 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:59.901454  661546 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace to be "Ready" ...
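The pod_ready waits that follow repeatedly fetch each system pod and check its Ready condition until it reports True or the 4m0s budget runs out. A minimal client-go sketch of that check, reusing the kubeconfig path and coredns pod name from the log (illustrative only, not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the status the
// waits above keep printing as "Ready":"False" until it flips.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path and pod name taken from the log above.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s wait budget
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-7c65d6cfc9-cdnjm", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}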
	I1209 11:52:57.593336  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:00.094942  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:58.132858  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:58.633386  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.132544  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.633427  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:00.133402  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:00.632719  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:01.132786  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:01.632909  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:02.133197  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:02.632620  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.896691  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:02.396546  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:01.907730  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:03.910835  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:02.591692  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:05.090892  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:03.133091  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:03.633389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.132587  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.633239  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:05.132773  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:05.632456  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:06.132989  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:06.632584  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:07.133153  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:07.633389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.895599  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:06.896541  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.912963  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:06.408122  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.412579  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:10.419673  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:10.419702  661546 pod_ready.go:82] duration metric: took 10.518223469s for pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:10.419716  661546 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:07.591181  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:10.091248  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.132885  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:08.633192  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:09.132446  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:09.633385  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:10.132534  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:10.632399  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.132877  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.633091  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:12.132592  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:12.633185  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.396121  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.901605  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:12.425696  661546 pod_ready.go:103] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.926007  661546 pod_ready.go:93] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.926041  661546 pod_ready.go:82] duration metric: took 3.50631846s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.926053  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.931124  661546 pod_ready.go:93] pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.931150  661546 pod_ready.go:82] duration metric: took 5.090118ms for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.931163  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.935763  661546 pod_ready.go:93] pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.935783  661546 pod_ready.go:82] duration metric: took 4.613902ms for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.935792  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b7bf2" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.940013  661546 pod_ready.go:93] pod "kube-proxy-b7bf2" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.940037  661546 pod_ready.go:82] duration metric: took 4.238468ms for pod "kube-proxy-b7bf2" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.940050  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.944480  661546 pod_ready.go:93] pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.944497  661546 pod_ready.go:82] duration metric: took 4.439334ms for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.944504  661546 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:15.951194  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:12.091413  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:14.591239  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.132852  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:13.632863  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:14.132638  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:14.632522  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:15.133201  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:15.632442  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:16.132620  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:16.132747  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:16.171708  662586 cri.go:89] found id: ""
	I1209 11:53:16.171748  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.171761  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:16.171768  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:16.171823  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:16.206350  662586 cri.go:89] found id: ""
	I1209 11:53:16.206381  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.206390  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:16.206398  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:16.206468  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:16.239292  662586 cri.go:89] found id: ""
	I1209 11:53:16.239325  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.239334  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:16.239341  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:16.239397  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:16.275809  662586 cri.go:89] found id: ""
	I1209 11:53:16.275841  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.275850  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:16.275856  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:16.275913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:16.310434  662586 cri.go:89] found id: ""
	I1209 11:53:16.310466  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.310474  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:16.310480  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:16.310540  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:16.347697  662586 cri.go:89] found id: ""
	I1209 11:53:16.347729  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.347738  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:16.347745  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:16.347801  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:16.380949  662586 cri.go:89] found id: ""
	I1209 11:53:16.380977  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.380985  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:16.380992  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:16.381074  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:16.415236  662586 cri.go:89] found id: ""
	I1209 11:53:16.415268  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.415290  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:16.415304  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:16.415321  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:16.459614  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:16.459645  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:16.509575  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:16.509617  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:16.522864  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:16.522898  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:16.644997  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:16.645059  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:16.645106  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
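Each retry round above probes for the expected control-plane containers with "crictl ps -a --quiet --name=<component>" and logs that no container was found when the output is empty. A minimal Go sketch of that probe (assumes crictl on PATH and sudo access; not minikube's cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs "crictl ps -a --quiet --name=<name>" and returns the
// matching container IDs; an empty result is the `found id: ""` case above.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // --quiet prints one ID per line
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Println("crictl failed:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}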
	I1209 11:53:16.396028  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:18.397195  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:17.951721  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.952199  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:16.591767  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.091470  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:21.095835  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.220978  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:19.233506  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:19.233597  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:19.268975  662586 cri.go:89] found id: ""
	I1209 11:53:19.269007  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.269019  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:19.269027  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:19.269103  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:19.304898  662586 cri.go:89] found id: ""
	I1209 11:53:19.304935  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.304949  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:19.304957  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:19.305034  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:19.344798  662586 cri.go:89] found id: ""
	I1209 11:53:19.344835  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.344846  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:19.344855  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:19.344925  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:19.395335  662586 cri.go:89] found id: ""
	I1209 11:53:19.395377  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.395387  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:19.395395  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:19.395464  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:19.430334  662586 cri.go:89] found id: ""
	I1209 11:53:19.430364  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.430377  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:19.430386  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:19.430465  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:19.468732  662586 cri.go:89] found id: ""
	I1209 11:53:19.468766  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.468775  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:19.468782  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:19.468836  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:19.503194  662586 cri.go:89] found id: ""
	I1209 11:53:19.503242  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.503255  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:19.503263  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:19.503328  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:19.537074  662586 cri.go:89] found id: ""
	I1209 11:53:19.537114  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.537125  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:19.537135  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:19.537151  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:19.590081  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:19.590130  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:19.604350  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:19.604388  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:19.683073  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:19.683106  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:19.683124  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:19.763564  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:19.763611  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:22.302792  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:22.315992  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:22.316079  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:22.350464  662586 cri.go:89] found id: ""
	I1209 11:53:22.350495  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.350505  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:22.350511  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:22.350569  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:22.382832  662586 cri.go:89] found id: ""
	I1209 11:53:22.382867  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.382880  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:22.382889  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:22.382958  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:22.417826  662586 cri.go:89] found id: ""
	I1209 11:53:22.417859  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.417871  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:22.417880  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:22.417963  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:22.451545  662586 cri.go:89] found id: ""
	I1209 11:53:22.451579  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.451588  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:22.451594  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:22.451659  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:22.488413  662586 cri.go:89] found id: ""
	I1209 11:53:22.488448  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.488458  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:22.488464  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:22.488531  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:22.523891  662586 cri.go:89] found id: ""
	I1209 11:53:22.523916  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.523925  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:22.523931  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:22.523990  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:22.555828  662586 cri.go:89] found id: ""
	I1209 11:53:22.555866  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.555879  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:22.555887  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:22.555960  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:22.592133  662586 cri.go:89] found id: ""
	I1209 11:53:22.592171  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.592181  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:22.592192  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:22.592209  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:22.641928  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:22.641966  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:22.655182  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:22.655215  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:53:20.896189  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:23.397242  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:21.957934  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:24.451292  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:23.591147  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:25.591982  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	W1209 11:53:22.724320  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:22.724343  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:22.724359  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:22.811692  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:22.811743  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:25.347903  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:25.360839  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:25.360907  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:25.392880  662586 cri.go:89] found id: ""
	I1209 11:53:25.392917  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.392930  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:25.392939  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:25.393008  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:25.427862  662586 cri.go:89] found id: ""
	I1209 11:53:25.427905  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.427914  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:25.427921  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:25.428009  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:25.463733  662586 cri.go:89] found id: ""
	I1209 11:53:25.463767  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.463778  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:25.463788  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:25.463884  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:25.501653  662586 cri.go:89] found id: ""
	I1209 11:53:25.501681  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.501690  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:25.501697  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:25.501751  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:25.535368  662586 cri.go:89] found id: ""
	I1209 11:53:25.535410  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.535422  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:25.535431  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:25.535511  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:25.569709  662586 cri.go:89] found id: ""
	I1209 11:53:25.569739  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.569748  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:25.569761  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:25.569827  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:25.604352  662586 cri.go:89] found id: ""
	I1209 11:53:25.604389  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.604404  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:25.604413  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:25.604477  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:25.635832  662586 cri.go:89] found id: ""
	I1209 11:53:25.635865  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.635878  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:25.635892  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:25.635908  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:25.650611  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:25.650647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:25.721092  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:25.721121  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:25.721139  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:25.795552  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:25.795598  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:25.858088  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:25.858161  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:25.898217  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.395882  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:26.950691  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.951203  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.091306  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:30.091842  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.410683  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:28.422993  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:28.423072  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:28.455054  662586 cri.go:89] found id: ""
	I1209 11:53:28.455083  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.455092  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:28.455098  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:28.455162  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:28.493000  662586 cri.go:89] found id: ""
	I1209 11:53:28.493037  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.493046  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:28.493052  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:28.493104  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:28.526294  662586 cri.go:89] found id: ""
	I1209 11:53:28.526333  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.526346  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:28.526354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:28.526417  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:28.560383  662586 cri.go:89] found id: ""
	I1209 11:53:28.560414  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.560423  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:28.560430  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:28.560485  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:28.595906  662586 cri.go:89] found id: ""
	I1209 11:53:28.595935  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.595946  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:28.595954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:28.596021  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:28.629548  662586 cri.go:89] found id: ""
	I1209 11:53:28.629584  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.629597  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:28.629607  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:28.629673  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:28.666362  662586 cri.go:89] found id: ""
	I1209 11:53:28.666398  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.666410  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:28.666418  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:28.666494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:28.697704  662586 cri.go:89] found id: ""
	I1209 11:53:28.697736  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.697746  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:28.697756  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:28.697769  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:28.745774  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:28.745816  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:28.759543  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:28.759582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:28.834772  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:28.834795  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:28.834812  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:28.913137  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:28.913178  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
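The run above is one full pass of the log-collection cycle for process 662586: pgrep finds no kube-apiserver process, every crictl probe (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) returns no container IDs, and the fallback "describe nodes" call fails because nothing answers on localhost:8443, so only the kubelet, dmesg, CRI-O, and container-status logs get collected. Roughly the same probes can be repeated by hand from a shell on the node (for example over minikube ssh); the commands below are copied from the Run lines above and are only a manual-reproduction sketch, not an additional step the test performs:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig   # fails here: connection to localhost:8443 refused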
	I1209 11:53:31.460658  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:31.473503  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:31.473575  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:31.506710  662586 cri.go:89] found id: ""
	I1209 11:53:31.506748  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.506760  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:31.506770  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:31.506842  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:31.544127  662586 cri.go:89] found id: ""
	I1209 11:53:31.544188  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.544202  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:31.544211  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:31.544289  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:31.591081  662586 cri.go:89] found id: ""
	I1209 11:53:31.591116  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.591128  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:31.591135  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:31.591213  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:31.629311  662586 cri.go:89] found id: ""
	I1209 11:53:31.629340  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.629348  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:31.629355  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:31.629432  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:31.671035  662586 cri.go:89] found id: ""
	I1209 11:53:31.671069  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.671081  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:31.671090  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:31.671162  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:31.705753  662586 cri.go:89] found id: ""
	I1209 11:53:31.705792  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.705805  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:31.705815  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:31.705889  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:31.739118  662586 cri.go:89] found id: ""
	I1209 11:53:31.739146  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.739155  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:31.739162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:31.739225  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:31.771085  662586 cri.go:89] found id: ""
	I1209 11:53:31.771120  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.771129  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:31.771139  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:31.771152  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:31.820993  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:31.821049  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:31.835576  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:31.835612  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:31.903011  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:31.903039  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:31.903056  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:31.977784  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:31.977830  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:30.896197  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:33.395937  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:31.450832  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:33.451161  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:35.451446  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:32.590724  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:34.592352  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
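Interleaved with that cycle, three other test processes (663024, 661546, 662109) keep polling their metrics-server pods (metrics-server-6867b74b74-bpccn, -x4kvn, -pwcsr) in the kube-system namespace, and none of them reports Ready anywhere in this window. A hand-run equivalent of that readiness check could look like the line below, where <profile> is a placeholder for the matching kubectl context; this is only an illustration, not the exact call pod_ready.go makes:

	kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-bpccn -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints False while the pod is unready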
	I1209 11:53:34.514654  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:34.529156  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:34.529236  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:34.567552  662586 cri.go:89] found id: ""
	I1209 11:53:34.567580  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.567590  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:34.567598  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:34.567665  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:34.608863  662586 cri.go:89] found id: ""
	I1209 11:53:34.608891  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.608900  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:34.608907  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:34.608970  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:34.647204  662586 cri.go:89] found id: ""
	I1209 11:53:34.647242  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.647254  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:34.647263  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:34.647333  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:34.682511  662586 cri.go:89] found id: ""
	I1209 11:53:34.682565  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.682580  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:34.682596  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:34.682674  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:34.717557  662586 cri.go:89] found id: ""
	I1209 11:53:34.717585  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.717595  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:34.717602  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:34.717670  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:34.749814  662586 cri.go:89] found id: ""
	I1209 11:53:34.749851  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.749865  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:34.749876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:34.749949  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:34.782732  662586 cri.go:89] found id: ""
	I1209 11:53:34.782766  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.782776  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:34.782782  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:34.782846  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:34.817114  662586 cri.go:89] found id: ""
	I1209 11:53:34.817149  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.817162  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:34.817175  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:34.817192  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:34.885963  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:34.885986  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:34.886001  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:34.969858  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:34.969905  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:35.006981  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:35.007024  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:35.055360  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:35.055401  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:37.570641  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:37.595904  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:37.595986  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:37.642205  662586 cri.go:89] found id: ""
	I1209 11:53:37.642248  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.642261  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:37.642270  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:37.642347  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:37.676666  662586 cri.go:89] found id: ""
	I1209 11:53:37.676692  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.676701  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:37.676707  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:37.676760  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:35.396037  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.896489  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.952569  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:40.450464  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.092250  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:39.092392  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.714201  662586 cri.go:89] found id: ""
	I1209 11:53:37.714233  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.714243  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:37.714249  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:37.714311  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:37.748018  662586 cri.go:89] found id: ""
	I1209 11:53:37.748047  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.748058  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:37.748067  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:37.748127  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:37.783763  662586 cri.go:89] found id: ""
	I1209 11:53:37.783799  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.783807  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:37.783823  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:37.783898  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:37.822470  662586 cri.go:89] found id: ""
	I1209 11:53:37.822502  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.822514  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:37.822523  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:37.822585  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:37.858493  662586 cri.go:89] found id: ""
	I1209 11:53:37.858527  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.858537  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:37.858543  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:37.858599  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:37.899263  662586 cri.go:89] found id: ""
	I1209 11:53:37.899288  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.899295  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:37.899304  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:37.899321  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:37.972531  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:37.972559  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:37.972575  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:38.046271  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:38.046315  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:38.088829  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:38.088867  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:38.141935  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:38.141985  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:40.657131  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:40.669884  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:40.669954  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:40.704291  662586 cri.go:89] found id: ""
	I1209 11:53:40.704332  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.704345  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:40.704357  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:40.704435  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:40.738637  662586 cri.go:89] found id: ""
	I1209 11:53:40.738673  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.738684  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:40.738690  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:40.738747  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:40.770737  662586 cri.go:89] found id: ""
	I1209 11:53:40.770774  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.770787  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:40.770796  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:40.770865  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:40.805667  662586 cri.go:89] found id: ""
	I1209 11:53:40.805702  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.805729  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:40.805739  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:40.805812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:40.838444  662586 cri.go:89] found id: ""
	I1209 11:53:40.838482  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.838496  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:40.838505  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:40.838578  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:40.871644  662586 cri.go:89] found id: ""
	I1209 11:53:40.871679  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.871691  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:40.871700  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:40.871763  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:40.907242  662586 cri.go:89] found id: ""
	I1209 11:53:40.907275  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.907284  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:40.907291  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:40.907359  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:40.941542  662586 cri.go:89] found id: ""
	I1209 11:53:40.941570  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.941583  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:40.941595  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:40.941616  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:41.022344  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:41.022373  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:41.022387  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:41.097083  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:41.097129  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:41.135303  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:41.135349  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:41.191400  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:41.191447  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:40.396681  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:42.895118  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:42.451217  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:44.951893  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:41.591753  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:44.090762  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:46.091821  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:43.705246  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:43.717939  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:43.718001  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:43.750027  662586 cri.go:89] found id: ""
	I1209 11:53:43.750066  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.750079  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:43.750087  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:43.750156  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:43.782028  662586 cri.go:89] found id: ""
	I1209 11:53:43.782067  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.782081  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:43.782090  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:43.782153  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:43.815509  662586 cri.go:89] found id: ""
	I1209 11:53:43.815549  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.815562  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:43.815570  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:43.815629  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:43.852803  662586 cri.go:89] found id: ""
	I1209 11:53:43.852834  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.852842  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:43.852850  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:43.852915  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:43.886761  662586 cri.go:89] found id: ""
	I1209 11:53:43.886789  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.886798  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:43.886805  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:43.886883  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:43.924427  662586 cri.go:89] found id: ""
	I1209 11:53:43.924458  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.924466  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:43.924478  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:43.924542  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:43.960351  662586 cri.go:89] found id: ""
	I1209 11:53:43.960381  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.960398  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:43.960407  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:43.960476  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:43.993933  662586 cri.go:89] found id: ""
	I1209 11:53:43.993960  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.993969  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:43.993979  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:43.994002  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:44.006915  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:44.006952  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:44.078928  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:44.078981  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:44.078999  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:44.158129  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:44.158188  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:44.199543  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:44.199577  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:46.748607  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:46.762381  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:46.762494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:46.795674  662586 cri.go:89] found id: ""
	I1209 11:53:46.795713  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.795727  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:46.795737  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:46.795812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:46.834027  662586 cri.go:89] found id: ""
	I1209 11:53:46.834055  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.834065  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:46.834072  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:46.834124  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:46.872116  662586 cri.go:89] found id: ""
	I1209 11:53:46.872156  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.872169  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:46.872179  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:46.872264  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:46.906571  662586 cri.go:89] found id: ""
	I1209 11:53:46.906599  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.906608  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:46.906615  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:46.906676  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:46.938266  662586 cri.go:89] found id: ""
	I1209 11:53:46.938303  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.938315  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:46.938323  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:46.938381  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:46.972281  662586 cri.go:89] found id: ""
	I1209 11:53:46.972318  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.972329  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:46.972337  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:46.972391  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:47.004797  662586 cri.go:89] found id: ""
	I1209 11:53:47.004828  662586 logs.go:282] 0 containers: []
	W1209 11:53:47.004837  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:47.004843  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:47.004908  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:47.035877  662586 cri.go:89] found id: ""
	I1209 11:53:47.035905  662586 logs.go:282] 0 containers: []
	W1209 11:53:47.035917  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:47.035931  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:47.035947  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:47.087654  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:47.087706  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:47.102311  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:47.102346  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:47.195370  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:47.195396  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:47.195414  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:47.279103  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:47.279158  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:44.895382  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:46.895838  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:48.896133  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:47.453879  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:49.951686  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:48.591393  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:51.090806  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:49.817942  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:49.830291  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:49.830357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:49.862917  662586 cri.go:89] found id: ""
	I1209 11:53:49.862950  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.862959  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:49.862965  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:49.863033  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:49.894971  662586 cri.go:89] found id: ""
	I1209 11:53:49.895005  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.895018  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:49.895027  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:49.895097  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:49.931737  662586 cri.go:89] found id: ""
	I1209 11:53:49.931775  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.931786  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:49.931800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:49.931862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:49.971064  662586 cri.go:89] found id: ""
	I1209 11:53:49.971097  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.971109  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:49.971118  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:49.971210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:50.005354  662586 cri.go:89] found id: ""
	I1209 11:53:50.005393  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.005417  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:50.005427  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:50.005501  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:50.044209  662586 cri.go:89] found id: ""
	I1209 11:53:50.044240  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.044249  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:50.044257  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:50.044313  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:50.076360  662586 cri.go:89] found id: ""
	I1209 11:53:50.076408  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.076418  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:50.076426  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:50.076494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:50.112125  662586 cri.go:89] found id: ""
	I1209 11:53:50.112168  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.112196  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:50.112210  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:50.112228  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:50.164486  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:50.164530  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:50.178489  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:50.178525  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:50.250131  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:50.250165  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:50.250196  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:50.329733  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:50.329779  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:50.896354  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:53.395149  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:52.450595  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:54.450939  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:53.092311  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:55.590766  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:52.874887  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:52.888518  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:52.888607  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:52.924361  662586 cri.go:89] found id: ""
	I1209 11:53:52.924389  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.924398  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:52.924404  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:52.924467  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:52.957769  662586 cri.go:89] found id: ""
	I1209 11:53:52.957803  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.957816  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:52.957824  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:52.957891  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:52.990339  662586 cri.go:89] found id: ""
	I1209 11:53:52.990376  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.990388  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:52.990397  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:52.990461  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:53.022959  662586 cri.go:89] found id: ""
	I1209 11:53:53.023003  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.023017  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:53.023028  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:53.023111  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:53.060271  662586 cri.go:89] found id: ""
	I1209 11:53:53.060299  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.060315  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:53.060321  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:53.060390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:53.093470  662586 cri.go:89] found id: ""
	I1209 11:53:53.093500  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.093511  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:53.093519  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:53.093575  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:53.128902  662586 cri.go:89] found id: ""
	I1209 11:53:53.128941  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.128955  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:53.128963  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:53.129036  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:53.161927  662586 cri.go:89] found id: ""
	I1209 11:53:53.161955  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.161964  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:53.161974  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:53.161988  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:53.214098  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:53.214140  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:53.229191  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:53.229232  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:53.308648  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:53.308678  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:53.308695  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:53.386776  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:53.386816  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:55.929307  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:55.942217  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:55.942285  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:55.983522  662586 cri.go:89] found id: ""
	I1209 11:53:55.983563  662586 logs.go:282] 0 containers: []
	W1209 11:53:55.983572  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:55.983579  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:55.983645  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:56.017262  662586 cri.go:89] found id: ""
	I1209 11:53:56.017293  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.017308  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:56.017314  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:56.017367  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:56.052385  662586 cri.go:89] found id: ""
	I1209 11:53:56.052419  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.052429  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:56.052436  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:56.052489  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:56.085385  662586 cri.go:89] found id: ""
	I1209 11:53:56.085432  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.085444  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:56.085452  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:56.085519  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:56.122754  662586 cri.go:89] found id: ""
	I1209 11:53:56.122785  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.122794  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:56.122800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:56.122862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:56.159033  662586 cri.go:89] found id: ""
	I1209 11:53:56.159061  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.159070  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:56.159077  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:56.159128  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:56.198022  662586 cri.go:89] found id: ""
	I1209 11:53:56.198058  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.198070  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:56.198078  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:56.198148  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:56.231475  662586 cri.go:89] found id: ""
	I1209 11:53:56.231515  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.231528  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:56.231542  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:56.231559  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:56.304922  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:56.304969  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:56.339875  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:56.339916  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:56.392893  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:56.392929  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:56.406334  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:56.406376  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:56.474037  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
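The block above is one complete iteration of minikube's log-gathering loop for this profile: no kube-apiserver process is found, every crictl query returns an empty ID list, and the describe-nodes call fails because nothing answers on localhost:8443. As a rough sketch only (command paths are copied from the log itself; shell access to the node under test is assumed), the same checks can be repeated by hand:

	# Sketch: manual re-run of the checks minikube performs in the loop above.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # is an apiserver process running at all?
	sudo crictl ps -a --quiet --name=kube-apiserver     # any apiserver container, running or exited?
	sudo journalctl -u kubelet -n 400                   # kubelet logs, same window the test collects
	sudo journalctl -u crio -n 400                      # CRI-O logs
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig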
	I1209 11:53:55.895077  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:57.895835  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:56.452163  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:58.950981  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:57.590943  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:00.091057  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:58.974725  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:58.987817  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:58.987890  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:59.020951  662586 cri.go:89] found id: ""
	I1209 11:53:59.020987  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.020996  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:59.021003  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:59.021055  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:59.055675  662586 cri.go:89] found id: ""
	I1209 11:53:59.055715  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.055727  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:59.055733  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:59.055800  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:59.090099  662586 cri.go:89] found id: ""
	I1209 11:53:59.090138  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.090150  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:59.090158  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:59.090252  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:59.124680  662586 cri.go:89] found id: ""
	I1209 11:53:59.124718  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.124730  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:59.124739  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:59.124802  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:59.157772  662586 cri.go:89] found id: ""
	I1209 11:53:59.157808  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.157819  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:59.157828  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:59.157892  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:59.191098  662586 cri.go:89] found id: ""
	I1209 11:53:59.191132  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.191141  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:59.191148  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:59.191212  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:59.224050  662586 cri.go:89] found id: ""
	I1209 11:53:59.224090  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.224102  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:59.224110  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:59.224198  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:59.262361  662586 cri.go:89] found id: ""
	I1209 11:53:59.262397  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.262418  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:59.262432  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:59.262456  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:59.276811  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:59.276844  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:59.349465  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:59.349492  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:59.349506  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:59.429146  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:59.429192  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:59.470246  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:59.470287  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:02.021651  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:02.036039  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:02.036109  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:02.070999  662586 cri.go:89] found id: ""
	I1209 11:54:02.071034  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.071045  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:02.071052  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:02.071119  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:02.107506  662586 cri.go:89] found id: ""
	I1209 11:54:02.107536  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.107546  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:02.107554  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:02.107624  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:02.146279  662586 cri.go:89] found id: ""
	I1209 11:54:02.146314  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.146326  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:02.146342  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:02.146408  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:02.178349  662586 cri.go:89] found id: ""
	I1209 11:54:02.178378  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.178387  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:02.178402  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:02.178460  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:02.211916  662586 cri.go:89] found id: ""
	I1209 11:54:02.211952  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.211963  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:02.211969  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:02.212038  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:02.246334  662586 cri.go:89] found id: ""
	I1209 11:54:02.246370  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.246380  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:02.246387  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:02.246452  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:02.280111  662586 cri.go:89] found id: ""
	I1209 11:54:02.280157  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.280168  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:02.280175  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:02.280246  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:02.314141  662586 cri.go:89] found id: ""
	I1209 11:54:02.314188  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.314203  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:02.314216  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:02.314236  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:02.327220  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:02.327253  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:02.396099  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:02.396127  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:02.396142  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:02.478096  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:02.478148  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:02.515144  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:02.515175  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:59.896109  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:02.396485  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:04.396925  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:01.450279  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:03.450732  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:05.451265  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:02.092010  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:04.092427  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:05.069286  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:05.082453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:05.082540  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:05.116263  662586 cri.go:89] found id: ""
	I1209 11:54:05.116299  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.116313  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:05.116321  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:05.116388  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:05.150736  662586 cri.go:89] found id: ""
	I1209 11:54:05.150775  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.150788  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:05.150796  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:05.150864  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:05.183757  662586 cri.go:89] found id: ""
	I1209 11:54:05.183792  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.183804  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:05.183812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:05.183873  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:05.215986  662586 cri.go:89] found id: ""
	I1209 11:54:05.216017  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.216029  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:05.216038  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:05.216096  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:05.247648  662586 cri.go:89] found id: ""
	I1209 11:54:05.247686  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.247698  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:05.247707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:05.247776  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:05.279455  662586 cri.go:89] found id: ""
	I1209 11:54:05.279484  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.279495  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:05.279504  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:05.279567  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:05.320334  662586 cri.go:89] found id: ""
	I1209 11:54:05.320374  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.320387  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:05.320398  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:05.320490  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:05.353475  662586 cri.go:89] found id: ""
	I1209 11:54:05.353503  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.353512  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:05.353522  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:05.353536  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:05.368320  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:05.368357  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:05.442152  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
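The recurring "connection to the server localhost:8443 was refused" message only indicates that no apiserver is listening yet on the control-plane port. A quick manual confirmation from the node might look like the following sketch (availability of ss and curl in the guest image is an assumption, not something this log shows):

	# Sketch: confirm nothing is serving the apiserver port while the control plane is down.
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	curl -k https://localhost:8443/healthz || true    # expected to fail with 'connection refused' in this state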
	I1209 11:54:05.442193  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:05.442212  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:05.523726  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:05.523768  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:05.562405  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:05.562438  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:06.895764  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.897032  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:07.454237  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:09.456440  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:06.591474  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.591578  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:11.091599  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.115564  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:08.129426  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:08.129523  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:08.162412  662586 cri.go:89] found id: ""
	I1209 11:54:08.162454  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.162467  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:08.162477  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:08.162543  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:08.196821  662586 cri.go:89] found id: ""
	I1209 11:54:08.196860  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.196873  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:08.196882  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:08.196949  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:08.233068  662586 cri.go:89] found id: ""
	I1209 11:54:08.233106  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.233117  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:08.233124  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:08.233184  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:08.268683  662586 cri.go:89] found id: ""
	I1209 11:54:08.268715  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.268724  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:08.268731  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:08.268790  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:08.303237  662586 cri.go:89] found id: ""
	I1209 11:54:08.303276  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.303288  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:08.303309  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:08.303393  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:08.339513  662586 cri.go:89] found id: ""
	I1209 11:54:08.339543  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.339551  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:08.339557  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:08.339612  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:08.376237  662586 cri.go:89] found id: ""
	I1209 11:54:08.376268  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.376289  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:08.376298  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:08.376363  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:08.410530  662586 cri.go:89] found id: ""
	I1209 11:54:08.410560  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.410568  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:08.410577  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:08.410589  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:08.460064  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:08.460101  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:08.474547  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:08.474582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:08.544231  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:08.544260  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:08.544277  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:08.624727  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:08.624775  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:11.167943  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:11.183210  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:11.183294  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:11.221326  662586 cri.go:89] found id: ""
	I1209 11:54:11.221356  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.221365  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:11.221371  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:11.221434  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:11.254688  662586 cri.go:89] found id: ""
	I1209 11:54:11.254721  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.254730  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:11.254736  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:11.254801  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:11.287611  662586 cri.go:89] found id: ""
	I1209 11:54:11.287649  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.287660  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:11.287666  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:11.287738  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:11.320533  662586 cri.go:89] found id: ""
	I1209 11:54:11.320565  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.320574  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:11.320580  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:11.320638  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:11.362890  662586 cri.go:89] found id: ""
	I1209 11:54:11.362923  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.362933  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:11.362939  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:11.363007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:11.418729  662586 cri.go:89] found id: ""
	I1209 11:54:11.418762  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.418772  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:11.418779  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:11.418837  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:11.455336  662586 cri.go:89] found id: ""
	I1209 11:54:11.455374  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.455388  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:11.455397  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:11.455479  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:11.491307  662586 cri.go:89] found id: ""
	I1209 11:54:11.491334  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.491344  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:11.491355  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:11.491369  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:11.543161  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:11.543204  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:11.556633  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:11.556670  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:11.626971  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:11.627001  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:11.627025  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:11.702061  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:11.702107  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:11.396167  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:13.897097  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:11.952029  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:14.451701  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:13.590749  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:15.591845  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:14.245241  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:14.258461  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:14.258544  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:14.292108  662586 cri.go:89] found id: ""
	I1209 11:54:14.292147  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.292156  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:14.292163  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:14.292219  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:14.327347  662586 cri.go:89] found id: ""
	I1209 11:54:14.327381  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.327394  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:14.327403  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:14.327484  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:14.361188  662586 cri.go:89] found id: ""
	I1209 11:54:14.361220  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.361229  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:14.361236  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:14.361290  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:14.394898  662586 cri.go:89] found id: ""
	I1209 11:54:14.394936  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.394948  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:14.394960  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:14.395027  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:14.429326  662586 cri.go:89] found id: ""
	I1209 11:54:14.429402  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.429420  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:14.429431  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:14.429510  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:14.462903  662586 cri.go:89] found id: ""
	I1209 11:54:14.462938  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.462947  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:14.462954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:14.463009  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:14.496362  662586 cri.go:89] found id: ""
	I1209 11:54:14.496396  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.496409  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:14.496418  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:14.496562  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:14.530052  662586 cri.go:89] found id: ""
	I1209 11:54:14.530085  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.530098  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:14.530111  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:14.530131  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:14.543096  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:14.543133  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:14.611030  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:14.611055  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:14.611067  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:14.684984  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:14.685041  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:14.722842  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:14.722881  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:17.275868  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:17.288812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:17.288898  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:17.323732  662586 cri.go:89] found id: ""
	I1209 11:54:17.323766  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.323777  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:17.323786  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:17.323852  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:17.367753  662586 cri.go:89] found id: ""
	I1209 11:54:17.367788  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.367801  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:17.367810  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:17.367878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:17.411444  662586 cri.go:89] found id: ""
	I1209 11:54:17.411476  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.411488  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:17.411496  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:17.411563  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:17.450790  662586 cri.go:89] found id: ""
	I1209 11:54:17.450821  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.450832  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:17.450840  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:17.450913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:17.488824  662586 cri.go:89] found id: ""
	I1209 11:54:17.488859  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.488869  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:17.488876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:17.488948  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:17.522051  662586 cri.go:89] found id: ""
	I1209 11:54:17.522085  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.522094  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:17.522102  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:17.522165  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:17.556653  662586 cri.go:89] found id: ""
	I1209 11:54:17.556687  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.556700  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:17.556707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:17.556783  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:17.591303  662586 cri.go:89] found id: ""
	I1209 11:54:17.591337  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.591355  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:17.591367  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:17.591384  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:17.656675  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:17.656699  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:17.656712  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:16.396574  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:18.896050  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:16.950508  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:19.451294  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:18.091307  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:20.091489  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:17.739894  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:17.739939  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:17.789486  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:17.789517  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:17.843606  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:17.843648  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:20.361896  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:20.378015  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:20.378105  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:20.412252  662586 cri.go:89] found id: ""
	I1209 11:54:20.412299  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.412311  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:20.412327  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:20.412396  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:20.443638  662586 cri.go:89] found id: ""
	I1209 11:54:20.443671  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.443682  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:20.443690  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:20.443758  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:20.478578  662586 cri.go:89] found id: ""
	I1209 11:54:20.478613  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.478625  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:20.478634  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:20.478704  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:20.512232  662586 cri.go:89] found id: ""
	I1209 11:54:20.512266  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.512279  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:20.512295  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:20.512357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:20.544358  662586 cri.go:89] found id: ""
	I1209 11:54:20.544398  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.544413  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:20.544429  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:20.544494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:20.579476  662586 cri.go:89] found id: ""
	I1209 11:54:20.579513  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.579525  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:20.579533  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:20.579600  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:20.613851  662586 cri.go:89] found id: ""
	I1209 11:54:20.613884  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.613897  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:20.613903  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:20.613973  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:20.647311  662586 cri.go:89] found id: ""
	I1209 11:54:20.647342  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.647351  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:20.647362  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:20.647375  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:20.695798  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:20.695839  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:20.709443  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:20.709478  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:20.779211  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:20.779237  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:20.779253  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:20.857966  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:20.858012  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:20.896168  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:22.896667  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:21.455716  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:23.950823  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:25.952038  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:22.592225  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:25.091934  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
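Interleaved with the 662586 gather loop, the other three runs (663024, 661546, 662109) are each blocked on pod_ready.go waiting for their metrics-server pod to report Ready. A hedged manual equivalent of that poll is sketched below; the kubeconfig context is a placeholder, since the profile names are not shown in this excerpt:

	# Sketch: check the Ready condition that pod_ready.go is polling; <context> is a placeholder.
	kubectl --context <context> -n kube-system get pod metrics-server-6867b74b74-bpccn \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'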
	I1209 11:54:23.398095  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:23.412622  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:23.412686  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:23.446582  662586 cri.go:89] found id: ""
	I1209 11:54:23.446616  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.446628  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:23.446637  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:23.446705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:23.487896  662586 cri.go:89] found id: ""
	I1209 11:54:23.487926  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.487935  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:23.487941  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:23.488007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:23.521520  662586 cri.go:89] found id: ""
	I1209 11:54:23.521559  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.521571  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:23.521579  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:23.521651  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:23.561296  662586 cri.go:89] found id: ""
	I1209 11:54:23.561329  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.561342  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:23.561350  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:23.561417  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:23.604936  662586 cri.go:89] found id: ""
	I1209 11:54:23.604965  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.604976  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:23.604985  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:23.605055  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:23.665193  662586 cri.go:89] found id: ""
	I1209 11:54:23.665225  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.665237  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:23.665247  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:23.665315  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:23.700202  662586 cri.go:89] found id: ""
	I1209 11:54:23.700239  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.700251  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:23.700259  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:23.700336  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:23.734877  662586 cri.go:89] found id: ""
	I1209 11:54:23.734907  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.734917  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:23.734927  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:23.734941  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:23.817328  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:23.817371  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:23.855052  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:23.855085  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:23.909107  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:23.909154  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:23.924198  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:23.924227  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:23.991976  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:26.492366  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:26.506223  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:26.506299  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:26.544932  662586 cri.go:89] found id: ""
	I1209 11:54:26.544974  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.544987  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:26.544997  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:26.545080  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:26.579581  662586 cri.go:89] found id: ""
	I1209 11:54:26.579621  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.579634  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:26.579643  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:26.579716  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:26.612510  662586 cri.go:89] found id: ""
	I1209 11:54:26.612545  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.612567  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:26.612577  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:26.612646  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:26.646273  662586 cri.go:89] found id: ""
	I1209 11:54:26.646306  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.646316  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:26.646322  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:26.646376  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:26.682027  662586 cri.go:89] found id: ""
	I1209 11:54:26.682063  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.682072  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:26.682078  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:26.682132  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:26.715822  662586 cri.go:89] found id: ""
	I1209 11:54:26.715876  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.715889  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:26.715898  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:26.715964  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:26.755976  662586 cri.go:89] found id: ""
	I1209 11:54:26.756016  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.756031  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:26.756040  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:26.756122  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:26.787258  662586 cri.go:89] found id: ""
	I1209 11:54:26.787297  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.787308  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:26.787319  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:26.787333  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:26.800534  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:26.800573  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:26.865767  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:26.865798  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:26.865824  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:26.950409  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:26.950460  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:26.994281  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:26.994320  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
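
The block above is minikube's log-collection pass for a control plane that never came up: every `crictl ps -a --quiet --name=<component>` query returns no container IDs, so the collector falls back to dumping kubelet, dmesg, "describe nodes", CRI-O, and container-status output. For reference, the same checks can be repeated by hand on the node (for example via `minikube ssh` — the manual invocation is an assumption; the commands themselves are taken verbatim from the log lines above):

	sudo crictl ps -a --quiet --name=kube-apiserver        # empty output = no apiserver container found
	sudo journalctl -u kubelet -n 400                      # kubelet logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400                         # CRI-O logs
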
	I1209 11:54:25.396411  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:27.894846  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:28.451141  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:30.455101  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:27.591769  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:30.091528  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:29.544568  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:29.565182  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:29.565263  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:29.625116  662586 cri.go:89] found id: ""
	I1209 11:54:29.625155  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.625168  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:29.625181  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:29.625257  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:29.673689  662586 cri.go:89] found id: ""
	I1209 11:54:29.673727  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.673739  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:29.673747  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:29.673811  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:29.705925  662586 cri.go:89] found id: ""
	I1209 11:54:29.705959  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.705971  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:29.705979  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:29.706033  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:29.738731  662586 cri.go:89] found id: ""
	I1209 11:54:29.738759  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.738767  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:29.738774  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:29.738832  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:29.770778  662586 cri.go:89] found id: ""
	I1209 11:54:29.770814  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.770826  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:29.770833  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:29.770899  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:29.801925  662586 cri.go:89] found id: ""
	I1209 11:54:29.801961  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.801973  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:29.801981  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:29.802050  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:29.833681  662586 cri.go:89] found id: ""
	I1209 11:54:29.833712  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.833722  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:29.833727  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:29.833791  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:29.873666  662586 cri.go:89] found id: ""
	I1209 11:54:29.873700  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.873712  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:29.873722  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:29.873735  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:29.914855  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:29.914895  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:29.967730  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:29.967772  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:29.982037  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:29.982070  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:30.047168  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:30.047195  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:30.047212  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:32.623371  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:32.636346  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:32.636411  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:32.677709  662586 cri.go:89] found id: ""
	I1209 11:54:32.677736  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.677744  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:32.677753  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:32.677805  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:29.896176  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.395216  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.952287  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:35.451456  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.092615  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:34.591397  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.710906  662586 cri.go:89] found id: ""
	I1209 11:54:32.710933  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.710942  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:32.710948  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:32.711000  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:32.744623  662586 cri.go:89] found id: ""
	I1209 11:54:32.744654  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.744667  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:32.744676  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:32.744736  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:32.779334  662586 cri.go:89] found id: ""
	I1209 11:54:32.779364  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.779375  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:32.779382  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:32.779443  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:32.814998  662586 cri.go:89] found id: ""
	I1209 11:54:32.815032  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.815046  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:32.815055  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:32.815128  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:32.850054  662586 cri.go:89] found id: ""
	I1209 11:54:32.850099  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.850116  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:32.850127  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:32.850213  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:32.885769  662586 cri.go:89] found id: ""
	I1209 11:54:32.885805  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.885818  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:32.885827  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:32.885899  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:32.927973  662586 cri.go:89] found id: ""
	I1209 11:54:32.928001  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.928010  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:32.928019  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:32.928032  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:32.981915  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:32.981966  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:32.995817  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:32.995851  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:33.062409  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:33.062445  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:33.062462  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:33.146967  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:33.147011  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:35.688225  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:35.701226  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:35.701325  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:35.738628  662586 cri.go:89] found id: ""
	I1209 11:54:35.738655  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.738663  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:35.738670  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:35.738737  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:35.771125  662586 cri.go:89] found id: ""
	I1209 11:54:35.771163  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.771177  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:35.771187  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:35.771260  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:35.806244  662586 cri.go:89] found id: ""
	I1209 11:54:35.806277  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.806290  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:35.806301  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:35.806376  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:35.839871  662586 cri.go:89] found id: ""
	I1209 11:54:35.839912  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.839925  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:35.839932  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:35.840010  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:35.874994  662586 cri.go:89] found id: ""
	I1209 11:54:35.875034  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.875046  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:35.875054  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:35.875129  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:35.910802  662586 cri.go:89] found id: ""
	I1209 11:54:35.910834  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.910846  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:35.910855  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:35.910927  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:35.944633  662586 cri.go:89] found id: ""
	I1209 11:54:35.944663  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.944672  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:35.944678  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:35.944749  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:35.982732  662586 cri.go:89] found id: ""
	I1209 11:54:35.982781  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.982796  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:35.982811  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:35.982830  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:35.996271  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:35.996302  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:36.063463  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:36.063533  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:36.063554  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:36.141789  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:36.141833  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:36.187015  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:36.187047  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:34.895890  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.396472  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.951404  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:40.452814  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.091548  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:39.092168  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:38.739585  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:38.754322  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:38.754394  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:38.792497  662586 cri.go:89] found id: ""
	I1209 11:54:38.792525  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.792535  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:38.792543  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:38.792608  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:38.829730  662586 cri.go:89] found id: ""
	I1209 11:54:38.829759  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.829768  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:38.829774  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:38.829834  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:38.869942  662586 cri.go:89] found id: ""
	I1209 11:54:38.869981  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.869994  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:38.870015  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:38.870085  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:38.906001  662586 cri.go:89] found id: ""
	I1209 11:54:38.906041  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.906054  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:38.906063  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:38.906133  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:38.944389  662586 cri.go:89] found id: ""
	I1209 11:54:38.944427  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.944445  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:38.944453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:38.944534  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:38.979633  662586 cri.go:89] found id: ""
	I1209 11:54:38.979665  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.979674  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:38.979681  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:38.979735  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:39.016366  662586 cri.go:89] found id: ""
	I1209 11:54:39.016402  662586 logs.go:282] 0 containers: []
	W1209 11:54:39.016416  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:39.016424  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:39.016489  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:39.049084  662586 cri.go:89] found id: ""
	I1209 11:54:39.049116  662586 logs.go:282] 0 containers: []
	W1209 11:54:39.049125  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:39.049134  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:39.049148  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:39.113953  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:39.113985  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:39.114004  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:39.191715  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:39.191767  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:39.232127  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:39.232167  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:39.281406  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:39.281448  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:41.795395  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:41.810293  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:41.810364  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:41.849819  662586 cri.go:89] found id: ""
	I1209 11:54:41.849858  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.849872  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:41.849882  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:41.849952  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:41.883871  662586 cri.go:89] found id: ""
	I1209 11:54:41.883908  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.883934  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:41.883942  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:41.884017  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:41.918194  662586 cri.go:89] found id: ""
	I1209 11:54:41.918230  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.918239  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:41.918245  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:41.918312  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:41.950878  662586 cri.go:89] found id: ""
	I1209 11:54:41.950912  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.950924  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:41.950933  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:41.950995  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:41.982922  662586 cri.go:89] found id: ""
	I1209 11:54:41.982964  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.982976  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:41.982985  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:41.983064  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:42.014066  662586 cri.go:89] found id: ""
	I1209 11:54:42.014107  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.014120  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:42.014129  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:42.014229  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:42.048017  662586 cri.go:89] found id: ""
	I1209 11:54:42.048056  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.048070  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:42.048079  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:42.048146  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:42.080585  662586 cri.go:89] found id: ""
	I1209 11:54:42.080614  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.080624  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:42.080634  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:42.080646  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:42.135012  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:42.135054  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:42.148424  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:42.148462  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:42.219179  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:42.219206  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:42.219230  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:42.305855  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:42.305902  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:39.895830  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:41.896255  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:44.398373  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:42.949835  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:44.951542  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:41.590831  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:43.592053  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:45.593044  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
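
The interleaved pod_ready lines come from three other test profiles polling their metrics-server pods; the Ready condition stays "False" throughout this window. A rough manual equivalent of that readiness check (pod name taken from the log; the kubectl jsonpath query is an illustrative assumption, not the harness's own code):

	kubectl -n kube-system get pod metrics-server-6867b74b74-bpccn \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
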
	I1209 11:54:44.843158  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:44.856317  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:44.856380  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:44.890940  662586 cri.go:89] found id: ""
	I1209 11:54:44.890984  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.891003  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:44.891012  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:44.891081  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:44.923657  662586 cri.go:89] found id: ""
	I1209 11:54:44.923684  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.923692  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:44.923698  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:44.923769  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:44.957512  662586 cri.go:89] found id: ""
	I1209 11:54:44.957545  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.957558  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:44.957566  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:44.957636  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:44.998084  662586 cri.go:89] found id: ""
	I1209 11:54:44.998112  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.998121  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:44.998128  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:44.998210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:45.030335  662586 cri.go:89] found id: ""
	I1209 11:54:45.030360  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.030369  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:45.030375  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:45.030447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:45.063098  662586 cri.go:89] found id: ""
	I1209 11:54:45.063127  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.063135  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:45.063141  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:45.063210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:45.098430  662586 cri.go:89] found id: ""
	I1209 11:54:45.098458  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.098466  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:45.098472  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:45.098526  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:45.132064  662586 cri.go:89] found id: ""
	I1209 11:54:45.132094  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.132102  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:45.132113  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:45.132131  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:45.185512  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:45.185556  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:45.199543  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:45.199572  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:45.268777  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:45.268803  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:45.268817  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:45.352250  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:45.352299  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:46.897153  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:49.395935  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:46.952862  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:49.450006  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:48.092394  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:50.591937  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:47.892201  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:47.906961  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:47.907053  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:47.941349  662586 cri.go:89] found id: ""
	I1209 11:54:47.941394  662586 logs.go:282] 0 containers: []
	W1209 11:54:47.941408  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:47.941418  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:47.941479  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:47.981086  662586 cri.go:89] found id: ""
	I1209 11:54:47.981120  662586 logs.go:282] 0 containers: []
	W1209 11:54:47.981133  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:47.981141  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:47.981210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:48.014105  662586 cri.go:89] found id: ""
	I1209 11:54:48.014142  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.014151  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:48.014162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:48.014249  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:48.049506  662586 cri.go:89] found id: ""
	I1209 11:54:48.049535  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.049544  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:48.049552  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:48.049619  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:48.084284  662586 cri.go:89] found id: ""
	I1209 11:54:48.084314  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.084324  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:48.084336  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:48.084406  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:48.117318  662586 cri.go:89] found id: ""
	I1209 11:54:48.117349  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.117362  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:48.117371  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:48.117441  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:48.150121  662586 cri.go:89] found id: ""
	I1209 11:54:48.150151  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.150187  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:48.150198  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:48.150266  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:48.180919  662586 cri.go:89] found id: ""
	I1209 11:54:48.180947  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.180955  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:48.180966  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:48.180978  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:48.249572  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:48.249602  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:48.249617  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:48.324508  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:48.324552  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:48.363856  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:48.363901  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:48.415662  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:48.415721  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:50.929811  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:50.943650  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:50.943714  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:50.976444  662586 cri.go:89] found id: ""
	I1209 11:54:50.976480  662586 logs.go:282] 0 containers: []
	W1209 11:54:50.976493  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:50.976502  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:50.976574  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:51.016567  662586 cri.go:89] found id: ""
	I1209 11:54:51.016600  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.016613  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:51.016621  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:51.016699  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:51.048933  662586 cri.go:89] found id: ""
	I1209 11:54:51.048967  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.048977  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:51.048986  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:51.049073  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:51.083292  662586 cri.go:89] found id: ""
	I1209 11:54:51.083333  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.083345  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:51.083354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:51.083423  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:51.118505  662586 cri.go:89] found id: ""
	I1209 11:54:51.118547  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.118560  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:51.118571  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:51.118644  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:51.152818  662586 cri.go:89] found id: ""
	I1209 11:54:51.152847  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.152856  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:51.152870  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:51.152922  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:51.186953  662586 cri.go:89] found id: ""
	I1209 11:54:51.186981  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.186991  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:51.186997  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:51.187063  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:51.219305  662586 cri.go:89] found id: ""
	I1209 11:54:51.219337  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.219348  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:51.219361  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:51.219380  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:51.256295  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:51.256338  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:51.313751  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:51.313806  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:51.326940  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:51.326977  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:51.397395  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:51.397428  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:51.397445  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:51.396434  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.896554  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:51.456719  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.951566  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:52.592043  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:55.091800  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.975557  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:53.989509  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:53.989581  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:54.024363  662586 cri.go:89] found id: ""
	I1209 11:54:54.024403  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.024416  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:54.024423  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:54.024484  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:54.062618  662586 cri.go:89] found id: ""
	I1209 11:54:54.062649  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.062659  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:54.062667  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:54.062739  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:54.100194  662586 cri.go:89] found id: ""
	I1209 11:54:54.100231  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.100243  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:54.100252  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:54.100324  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:54.135302  662586 cri.go:89] found id: ""
	I1209 11:54:54.135341  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.135354  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:54.135363  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:54.135447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:54.170898  662586 cri.go:89] found id: ""
	I1209 11:54:54.170940  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.170953  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:54.170963  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:54.171035  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:54.205098  662586 cri.go:89] found id: ""
	I1209 11:54:54.205138  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.205151  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:54.205159  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:54.205223  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:54.239153  662586 cri.go:89] found id: ""
	I1209 11:54:54.239210  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.239226  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:54.239234  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:54.239307  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:54.278213  662586 cri.go:89] found id: ""
	I1209 11:54:54.278248  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.278260  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:54.278275  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:54.278296  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:54.348095  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:54.348128  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:54.348156  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:54.427181  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:54.427224  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:54.467623  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:54.467656  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:54.519690  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:54.519734  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:57.033524  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:57.046420  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:57.046518  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:57.079588  662586 cri.go:89] found id: ""
	I1209 11:54:57.079616  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.079626  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:57.079633  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:57.079687  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:57.114944  662586 cri.go:89] found id: ""
	I1209 11:54:57.114973  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.114982  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:57.114988  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:57.115043  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:57.147667  662586 cri.go:89] found id: ""
	I1209 11:54:57.147708  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.147721  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:57.147730  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:57.147794  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:57.182339  662586 cri.go:89] found id: ""
	I1209 11:54:57.182370  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.182386  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:57.182395  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:57.182470  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:57.223129  662586 cri.go:89] found id: ""
	I1209 11:54:57.223170  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.223186  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:57.223197  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:57.223270  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:57.262351  662586 cri.go:89] found id: ""
	I1209 11:54:57.262386  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.262398  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:57.262409  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:57.262471  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:57.298743  662586 cri.go:89] found id: ""
	I1209 11:54:57.298772  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.298782  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:57.298789  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:57.298856  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:57.339030  662586 cri.go:89] found id: ""
	I1209 11:54:57.339064  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.339073  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:57.339085  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:57.339122  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:57.352603  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:57.352637  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:57.426627  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:57.426653  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:57.426669  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:57.515357  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:57.515401  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:57.554882  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:57.554925  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:56.396610  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:58.895822  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:56.451429  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:58.951440  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:57.590864  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:00.091967  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:00.112082  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:00.124977  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:00.125056  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:00.159003  662586 cri.go:89] found id: ""
	I1209 11:55:00.159032  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.159041  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:00.159048  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:00.159101  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:00.192479  662586 cri.go:89] found id: ""
	I1209 11:55:00.192515  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.192527  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:00.192533  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:00.192587  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:00.226146  662586 cri.go:89] found id: ""
	I1209 11:55:00.226194  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.226208  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:00.226216  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:00.226273  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:00.260389  662586 cri.go:89] found id: ""
	I1209 11:55:00.260420  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.260430  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:00.260442  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:00.260500  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:00.296091  662586 cri.go:89] found id: ""
	I1209 11:55:00.296121  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.296131  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:00.296138  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:00.296195  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:00.332101  662586 cri.go:89] found id: ""
	I1209 11:55:00.332137  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.332150  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:00.332158  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:00.332244  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:00.377329  662586 cri.go:89] found id: ""
	I1209 11:55:00.377358  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.377368  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:00.377374  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:00.377438  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:00.415660  662586 cri.go:89] found id: ""
	I1209 11:55:00.415688  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.415751  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:00.415767  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:00.415781  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:00.467734  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:00.467776  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:00.481244  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:00.481280  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:00.545721  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:00.545755  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:00.545777  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:00.624482  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:00.624533  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:01.396452  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.895539  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:01.452337  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.950752  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:05.951246  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:02.092654  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:04.592173  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.168340  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:03.183354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:03.183439  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:03.223131  662586 cri.go:89] found id: ""
	I1209 11:55:03.223171  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.223185  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:03.223193  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:03.223263  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:03.256561  662586 cri.go:89] found id: ""
	I1209 11:55:03.256595  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.256603  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:03.256609  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:03.256667  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:03.289670  662586 cri.go:89] found id: ""
	I1209 11:55:03.289707  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.289722  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:03.289738  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:03.289813  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:03.323687  662586 cri.go:89] found id: ""
	I1209 11:55:03.323714  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.323724  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:03.323730  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:03.323786  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:03.358163  662586 cri.go:89] found id: ""
	I1209 11:55:03.358221  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.358233  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:03.358241  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:03.358311  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:03.399688  662586 cri.go:89] found id: ""
	I1209 11:55:03.399721  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.399734  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:03.399744  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:03.399812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:03.433909  662586 cri.go:89] found id: ""
	I1209 11:55:03.433939  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.433948  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:03.433954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:03.434011  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:03.470208  662586 cri.go:89] found id: ""
	I1209 11:55:03.470239  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.470248  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:03.470270  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:03.470289  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:03.545801  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:03.545848  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:03.584357  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:03.584389  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:03.641241  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:03.641283  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:03.657034  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:03.657080  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:03.731285  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:06.232380  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:06.246339  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:06.246411  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:06.281323  662586 cri.go:89] found id: ""
	I1209 11:55:06.281362  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.281377  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:06.281385  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:06.281444  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:06.318225  662586 cri.go:89] found id: ""
	I1209 11:55:06.318261  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.318277  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:06.318293  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:06.318364  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:06.353649  662586 cri.go:89] found id: ""
	I1209 11:55:06.353685  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.353699  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:06.353708  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:06.353782  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:06.395204  662586 cri.go:89] found id: ""
	I1209 11:55:06.395242  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.395257  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:06.395266  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:06.395335  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:06.436421  662586 cri.go:89] found id: ""
	I1209 11:55:06.436452  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.436462  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:06.436469  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:06.436524  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:06.472218  662586 cri.go:89] found id: ""
	I1209 11:55:06.472246  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.472255  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:06.472268  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:06.472335  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:06.506585  662586 cri.go:89] found id: ""
	I1209 11:55:06.506629  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.506640  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:06.506647  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:06.506702  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:06.541442  662586 cri.go:89] found id: ""
	I1209 11:55:06.541472  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.541481  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:06.541493  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:06.541512  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:06.592642  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:06.592682  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:06.606764  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:06.606805  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:06.677693  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:06.677720  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:06.677740  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:06.766074  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:06.766124  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:05.896263  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:08.396283  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:07.951409  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:10.451540  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:06.592724  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:09.091961  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:09.305144  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:09.319352  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:09.319444  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:09.357918  662586 cri.go:89] found id: ""
	I1209 11:55:09.358027  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.358066  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:09.358077  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:09.358139  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:09.413181  662586 cri.go:89] found id: ""
	I1209 11:55:09.413213  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.413226  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:09.413234  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:09.413310  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:09.448417  662586 cri.go:89] found id: ""
	I1209 11:55:09.448460  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.448471  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:09.448480  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:09.448566  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:09.489732  662586 cri.go:89] found id: ""
	I1209 11:55:09.489765  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.489775  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:09.489781  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:09.489845  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:09.524919  662586 cri.go:89] found id: ""
	I1209 11:55:09.524948  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.524959  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:09.524968  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:09.525051  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:09.563268  662586 cri.go:89] found id: ""
	I1209 11:55:09.563301  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.563311  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:09.563318  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:09.563373  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:09.598747  662586 cri.go:89] found id: ""
	I1209 11:55:09.598780  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.598790  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:09.598798  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:09.598866  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:09.634447  662586 cri.go:89] found id: ""
	I1209 11:55:09.634479  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.634492  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:09.634505  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:09.634520  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:09.647380  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:09.647419  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:09.721335  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:09.721363  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:09.721380  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:09.801039  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:09.801088  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:09.840929  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:09.840971  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:12.393810  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:12.407553  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:12.407654  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:12.444391  662586 cri.go:89] found id: ""
	I1209 11:55:12.444437  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.444450  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:12.444459  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:12.444533  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:12.482714  662586 cri.go:89] found id: ""
	I1209 11:55:12.482752  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.482764  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:12.482771  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:12.482853  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:12.518139  662586 cri.go:89] found id: ""
	I1209 11:55:12.518187  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.518202  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:12.518211  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:12.518281  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:12.556903  662586 cri.go:89] found id: ""
	I1209 11:55:12.556938  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.556950  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:12.556958  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:12.557028  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:12.591915  662586 cri.go:89] found id: ""
	I1209 11:55:12.591953  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.591963  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:12.591971  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:12.592038  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:12.629767  662586 cri.go:89] found id: ""
	I1209 11:55:12.629797  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.629806  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:12.629812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:12.629878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:12.667677  662586 cri.go:89] found id: ""
	I1209 11:55:12.667710  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.667720  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:12.667727  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:12.667781  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:10.896109  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.896992  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.451770  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:14.952359  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:11.591952  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:14.092213  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.705720  662586 cri.go:89] found id: ""
	I1209 11:55:12.705747  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.705756  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:12.705766  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:12.705780  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:12.758399  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:12.758441  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:12.772297  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:12.772336  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:12.839545  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:12.839569  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:12.839582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:12.918424  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:12.918467  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:15.458122  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:15.473193  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:15.473284  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:15.508756  662586 cri.go:89] found id: ""
	I1209 11:55:15.508790  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.508799  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:15.508806  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:15.508862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:15.544735  662586 cri.go:89] found id: ""
	I1209 11:55:15.544770  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.544782  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:15.544791  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:15.544866  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:15.577169  662586 cri.go:89] found id: ""
	I1209 11:55:15.577200  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.577210  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:15.577216  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:15.577277  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:15.610662  662586 cri.go:89] found id: ""
	I1209 11:55:15.610690  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.610700  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:15.610707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:15.610763  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:15.645339  662586 cri.go:89] found id: ""
	I1209 11:55:15.645375  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.645386  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:15.645394  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:15.645469  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:15.682044  662586 cri.go:89] found id: ""
	I1209 11:55:15.682079  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.682096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:15.682106  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:15.682201  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:15.717193  662586 cri.go:89] found id: ""
	I1209 11:55:15.717228  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.717245  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:15.717256  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:15.717332  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:15.751756  662586 cri.go:89] found id: ""
	I1209 11:55:15.751792  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.751803  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:15.751813  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:15.751827  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:15.811010  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:15.811063  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:15.842556  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:15.842597  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:15.920169  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:15.920195  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:15.920209  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:16.003180  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:16.003226  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:15.395666  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:17.396041  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:19.396262  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:17.451272  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:19.951638  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:16.591423  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:18.592456  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:21.090108  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:18.542563  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:18.555968  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:18.556059  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:18.588746  662586 cri.go:89] found id: ""
	I1209 11:55:18.588780  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.588790  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:18.588797  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:18.588854  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:18.623664  662586 cri.go:89] found id: ""
	I1209 11:55:18.623707  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.623720  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:18.623728  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:18.623798  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:18.659012  662586 cri.go:89] found id: ""
	I1209 11:55:18.659051  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.659065  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:18.659074  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:18.659148  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:18.693555  662586 cri.go:89] found id: ""
	I1209 11:55:18.693588  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.693600  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:18.693607  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:18.693661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:18.726609  662586 cri.go:89] found id: ""
	I1209 11:55:18.726641  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.726652  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:18.726659  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:18.726712  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:18.760654  662586 cri.go:89] found id: ""
	I1209 11:55:18.760682  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.760694  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:18.760704  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:18.760761  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:18.794656  662586 cri.go:89] found id: ""
	I1209 11:55:18.794688  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.794699  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:18.794706  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:18.794769  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:18.829988  662586 cri.go:89] found id: ""
	I1209 11:55:18.830030  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.830045  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:18.830059  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:18.830073  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:18.872523  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:18.872558  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:18.929408  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:18.929449  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:18.943095  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:18.943133  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:19.009125  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:19.009150  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:19.009164  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:21.587418  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:21.606271  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:21.606358  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:21.653536  662586 cri.go:89] found id: ""
	I1209 11:55:21.653574  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.653586  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:21.653595  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:21.653671  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:21.687023  662586 cri.go:89] found id: ""
	I1209 11:55:21.687049  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.687060  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:21.687068  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:21.687131  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:21.720112  662586 cri.go:89] found id: ""
	I1209 11:55:21.720150  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.720163  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:21.720171  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:21.720243  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:21.754697  662586 cri.go:89] found id: ""
	I1209 11:55:21.754729  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.754740  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:21.754749  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:21.754814  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:21.793926  662586 cri.go:89] found id: ""
	I1209 11:55:21.793957  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.793967  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:21.793973  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:21.794040  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:21.827572  662586 cri.go:89] found id: ""
	I1209 11:55:21.827609  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.827622  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:21.827633  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:21.827700  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:21.861442  662586 cri.go:89] found id: ""
	I1209 11:55:21.861472  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.861490  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:21.861499  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:21.861565  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:21.894858  662586 cri.go:89] found id: ""
	I1209 11:55:21.894884  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.894892  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:21.894901  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:21.894914  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:21.942567  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:21.942625  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:21.956849  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:21.956879  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:22.020700  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:22.020724  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:22.020738  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:22.095730  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:22.095767  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:21.896304  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.395936  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:21.951928  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.450997  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:23.090962  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:25.091816  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.631715  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:24.644165  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:24.644252  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:24.677720  662586 cri.go:89] found id: ""
	I1209 11:55:24.677757  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.677769  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:24.677778  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:24.677835  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:24.711053  662586 cri.go:89] found id: ""
	I1209 11:55:24.711086  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.711095  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:24.711101  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:24.711154  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:24.744107  662586 cri.go:89] found id: ""
	I1209 11:55:24.744139  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.744148  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:24.744154  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:24.744210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:24.777811  662586 cri.go:89] found id: ""
	I1209 11:55:24.777853  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.777866  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:24.777876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:24.777938  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:24.810524  662586 cri.go:89] found id: ""
	I1209 11:55:24.810558  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.810571  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:24.810580  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:24.810648  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:24.843551  662586 cri.go:89] found id: ""
	I1209 11:55:24.843582  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.843590  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:24.843597  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:24.843649  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:24.875342  662586 cri.go:89] found id: ""
	I1209 11:55:24.875371  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.875384  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:24.875390  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:24.875446  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:24.910298  662586 cri.go:89] found id: ""
	I1209 11:55:24.910329  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.910340  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:24.910352  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:24.910377  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:24.962151  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:24.962204  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:24.976547  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:24.976577  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:25.050606  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:25.050635  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:25.050652  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:25.134204  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:25.134254  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:27.671220  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:27.685132  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:27.685194  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:26.895311  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:28.895954  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:26.950106  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:28.950915  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:30.952019  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:27.591908  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:30.090353  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:27.718113  662586 cri.go:89] found id: ""
	I1209 11:55:27.718141  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.718150  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:27.718160  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:27.718242  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:27.752350  662586 cri.go:89] found id: ""
	I1209 11:55:27.752384  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.752395  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:27.752401  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:27.752481  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:27.797360  662586 cri.go:89] found id: ""
	I1209 11:55:27.797393  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.797406  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:27.797415  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:27.797488  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:27.834549  662586 cri.go:89] found id: ""
	I1209 11:55:27.834579  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.834588  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:27.834594  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:27.834655  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:27.874403  662586 cri.go:89] found id: ""
	I1209 11:55:27.874440  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.874465  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:27.874474  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:27.874557  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:27.914324  662586 cri.go:89] found id: ""
	I1209 11:55:27.914360  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.914373  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:27.914380  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:27.914450  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:27.948001  662586 cri.go:89] found id: ""
	I1209 11:55:27.948043  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.948056  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:27.948066  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:27.948219  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:27.982329  662586 cri.go:89] found id: ""
	I1209 11:55:27.982360  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.982369  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:27.982379  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:27.982391  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:28.038165  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:28.038228  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:28.051578  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:28.051609  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:28.119914  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:28.119937  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:28.119951  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:28.195634  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:28.195679  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:30.735392  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:30.748430  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:30.748521  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:30.780500  662586 cri.go:89] found id: ""
	I1209 11:55:30.780528  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.780537  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:30.780544  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:30.780606  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:30.812430  662586 cri.go:89] found id: ""
	I1209 11:55:30.812462  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.812470  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:30.812477  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:30.812530  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:30.854030  662586 cri.go:89] found id: ""
	I1209 11:55:30.854057  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.854066  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:30.854073  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:30.854130  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:30.892144  662586 cri.go:89] found id: ""
	I1209 11:55:30.892182  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.892202  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:30.892211  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:30.892284  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:30.927540  662586 cri.go:89] found id: ""
	I1209 11:55:30.927576  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.927590  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:30.927597  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:30.927660  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:30.963820  662586 cri.go:89] found id: ""
	I1209 11:55:30.963852  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.963861  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:30.963867  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:30.963920  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:30.997793  662586 cri.go:89] found id: ""
	I1209 11:55:30.997819  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.997828  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:30.997836  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:30.997902  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:31.031649  662586 cri.go:89] found id: ""
	I1209 11:55:31.031699  662586 logs.go:282] 0 containers: []
	W1209 11:55:31.031712  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:31.031726  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:31.031746  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:31.101464  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:31.101492  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:31.101509  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:31.184635  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:31.184681  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:31.222690  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:31.222732  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:31.276518  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:31.276566  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:30.896544  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.395861  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.451560  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:35.952567  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:32.091788  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:34.592091  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.790941  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:33.805299  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:33.805390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:33.844205  662586 cri.go:89] found id: ""
	I1209 11:55:33.844241  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.844253  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:33.844262  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:33.844337  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:33.883378  662586 cri.go:89] found id: ""
	I1209 11:55:33.883410  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.883424  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:33.883431  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:33.883505  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:33.920007  662586 cri.go:89] found id: ""
	I1209 11:55:33.920049  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.920061  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:33.920074  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:33.920141  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:33.956111  662586 cri.go:89] found id: ""
	I1209 11:55:33.956163  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.956175  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:33.956183  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:33.956241  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:33.990057  662586 cri.go:89] found id: ""
	I1209 11:55:33.990092  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.990102  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:33.990109  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:33.990166  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:34.023046  662586 cri.go:89] found id: ""
	I1209 11:55:34.023082  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.023096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:34.023103  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:34.023171  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:34.055864  662586 cri.go:89] found id: ""
	I1209 11:55:34.055898  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.055909  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:34.055916  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:34.055987  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:34.091676  662586 cri.go:89] found id: ""
	I1209 11:55:34.091710  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.091722  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:34.091733  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:34.091747  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:34.142959  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:34.143002  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:34.156431  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:34.156466  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:34.230277  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:34.230303  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:34.230320  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:34.313660  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:34.313713  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:36.850056  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:36.862486  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:36.862582  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:36.893134  662586 cri.go:89] found id: ""
	I1209 11:55:36.893163  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.893173  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:36.893179  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:36.893257  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:36.927438  662586 cri.go:89] found id: ""
	I1209 11:55:36.927469  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.927479  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:36.927485  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:36.927546  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:36.958787  662586 cri.go:89] found id: ""
	I1209 11:55:36.958818  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.958829  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:36.958837  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:36.958901  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:36.995470  662586 cri.go:89] found id: ""
	I1209 11:55:36.995508  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.995520  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:36.995529  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:36.995590  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:37.026705  662586 cri.go:89] found id: ""
	I1209 11:55:37.026736  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.026746  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:37.026752  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:37.026805  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:37.059717  662586 cri.go:89] found id: ""
	I1209 11:55:37.059748  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.059756  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:37.059762  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:37.059820  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:37.094049  662586 cri.go:89] found id: ""
	I1209 11:55:37.094076  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.094088  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:37.094097  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:37.094190  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:37.128684  662586 cri.go:89] found id: ""
	I1209 11:55:37.128715  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.128724  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:37.128735  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:37.128755  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:37.177932  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:37.177973  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:37.191218  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:37.191252  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:37.256488  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:37.256521  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:37.256538  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:37.330603  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:37.330647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:35.895823  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.895972  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.952764  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:40.450704  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.092013  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:39.591402  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:39.868604  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:39.881991  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:39.882063  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:39.916750  662586 cri.go:89] found id: ""
	I1209 11:55:39.916786  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.916799  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:39.916806  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:39.916874  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:39.957744  662586 cri.go:89] found id: ""
	I1209 11:55:39.957773  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.957781  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:39.957788  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:39.957854  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:39.994613  662586 cri.go:89] found id: ""
	I1209 11:55:39.994645  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.994654  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:39.994661  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:39.994726  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:40.032606  662586 cri.go:89] found id: ""
	I1209 11:55:40.032635  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.032644  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:40.032650  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:40.032710  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:40.067172  662586 cri.go:89] found id: ""
	I1209 11:55:40.067204  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.067214  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:40.067221  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:40.067278  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:40.101391  662586 cri.go:89] found id: ""
	I1209 11:55:40.101423  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.101432  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:40.101439  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:40.101510  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:40.133160  662586 cri.go:89] found id: ""
	I1209 11:55:40.133196  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.133209  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:40.133217  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:40.133283  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:40.166105  662586 cri.go:89] found id: ""
	I1209 11:55:40.166137  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.166145  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:40.166160  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:40.166187  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:40.231525  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:40.231559  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:40.231582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:40.311298  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:40.311354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:40.350040  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:40.350077  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:40.404024  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:40.404061  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:39.896541  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.396800  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.453720  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:44.950595  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.091300  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:44.591230  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.917868  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:42.930289  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:42.930357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:42.962822  662586 cri.go:89] found id: ""
	I1209 11:55:42.962856  662586 logs.go:282] 0 containers: []
	W1209 11:55:42.962869  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:42.962878  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:42.962950  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:42.996932  662586 cri.go:89] found id: ""
	I1209 11:55:42.996962  662586 logs.go:282] 0 containers: []
	W1209 11:55:42.996972  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:42.996979  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:42.997040  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:43.031782  662586 cri.go:89] found id: ""
	I1209 11:55:43.031824  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.031837  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:43.031846  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:43.031915  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:43.064717  662586 cri.go:89] found id: ""
	I1209 11:55:43.064751  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.064764  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:43.064774  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:43.064851  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:43.097248  662586 cri.go:89] found id: ""
	I1209 11:55:43.097278  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.097287  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:43.097294  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:43.097356  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:43.135726  662586 cri.go:89] found id: ""
	I1209 11:55:43.135766  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.135779  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:43.135788  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:43.135881  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:43.171120  662586 cri.go:89] found id: ""
	I1209 11:55:43.171148  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.171157  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:43.171163  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:43.171216  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:43.207488  662586 cri.go:89] found id: ""
	I1209 11:55:43.207523  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.207533  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:43.207545  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:43.207565  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:43.276112  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:43.276142  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:43.276159  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:43.354942  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:43.354990  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:43.392755  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:43.392800  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:43.445708  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:43.445752  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:45.962533  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:45.975508  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:45.975589  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:46.009619  662586 cri.go:89] found id: ""
	I1209 11:55:46.009653  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.009663  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:46.009670  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:46.009726  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:46.042218  662586 cri.go:89] found id: ""
	I1209 11:55:46.042250  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.042259  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:46.042265  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:46.042318  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:46.076204  662586 cri.go:89] found id: ""
	I1209 11:55:46.076239  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.076249  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:46.076255  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:46.076326  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:46.113117  662586 cri.go:89] found id: ""
	I1209 11:55:46.113145  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.113154  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:46.113160  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:46.113225  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:46.148232  662586 cri.go:89] found id: ""
	I1209 11:55:46.148277  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.148293  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:46.148303  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:46.148379  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:46.185028  662586 cri.go:89] found id: ""
	I1209 11:55:46.185083  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.185096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:46.185106  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:46.185200  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:46.222882  662586 cri.go:89] found id: ""
	I1209 11:55:46.222920  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.222933  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:46.222941  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:46.223007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:46.263486  662586 cri.go:89] found id: ""
	I1209 11:55:46.263528  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.263538  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:46.263549  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:46.263565  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:46.340524  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:46.340550  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:46.340567  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:46.422768  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:46.422810  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:46.464344  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:46.464382  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:46.517311  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:46.517354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:44.895283  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.895427  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:48.895674  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.952912  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:48.953432  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.591521  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:49.093057  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:49.031192  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:49.043840  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:49.043929  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:49.077648  662586 cri.go:89] found id: ""
	I1209 11:55:49.077705  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.077720  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:49.077730  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:49.077802  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:49.114111  662586 cri.go:89] found id: ""
	I1209 11:55:49.114138  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.114146  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:49.114154  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:49.114236  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:49.147870  662586 cri.go:89] found id: ""
	I1209 11:55:49.147908  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.147917  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:49.147923  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:49.147976  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:49.185223  662586 cri.go:89] found id: ""
	I1209 11:55:49.185256  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.185269  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:49.185277  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:49.185350  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:49.218037  662586 cri.go:89] found id: ""
	I1209 11:55:49.218068  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.218077  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:49.218084  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:49.218138  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:49.255483  662586 cri.go:89] found id: ""
	I1209 11:55:49.255522  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.255535  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:49.255549  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:49.255629  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:49.288623  662586 cri.go:89] found id: ""
	I1209 11:55:49.288650  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.288659  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:49.288666  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:49.288732  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:49.322880  662586 cri.go:89] found id: ""
	I1209 11:55:49.322913  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.322921  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:49.322930  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:49.322943  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:49.372380  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:49.372428  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:49.385877  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:49.385914  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:49.460078  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:49.460101  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:49.460114  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:49.534588  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:49.534647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:52.071408  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:52.084198  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:52.084276  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:52.118908  662586 cri.go:89] found id: ""
	I1209 11:55:52.118937  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.118950  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:52.118958  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:52.119026  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:52.156494  662586 cri.go:89] found id: ""
	I1209 11:55:52.156521  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.156530  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:52.156535  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:52.156586  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:52.196037  662586 cri.go:89] found id: ""
	I1209 11:55:52.196075  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.196094  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:52.196102  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:52.196177  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:52.229436  662586 cri.go:89] found id: ""
	I1209 11:55:52.229465  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.229477  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:52.229486  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:52.229558  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:52.268751  662586 cri.go:89] found id: ""
	I1209 11:55:52.268785  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.268797  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:52.268805  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:52.268871  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:52.302405  662586 cri.go:89] found id: ""
	I1209 11:55:52.302436  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.302446  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:52.302453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:52.302522  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:52.338641  662586 cri.go:89] found id: ""
	I1209 11:55:52.338676  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.338688  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:52.338698  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:52.338754  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:52.375541  662586 cri.go:89] found id: ""
	I1209 11:55:52.375578  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.375591  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:52.375604  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:52.375624  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:52.389140  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:52.389190  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:52.460520  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:52.460546  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:52.460562  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:52.535234  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:52.535280  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:52.573317  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:52.573354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:50.896292  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:52.896875  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:51.453540  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:53.456640  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:55.950197  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:51.590899  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:53.591317  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:56.092219  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:55.124068  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:55.136800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:55.136868  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:55.169724  662586 cri.go:89] found id: ""
	I1209 11:55:55.169757  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.169769  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:55.169777  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:55.169843  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:55.207466  662586 cri.go:89] found id: ""
	I1209 11:55:55.207514  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.207528  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:55.207537  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:55.207600  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:55.241761  662586 cri.go:89] found id: ""
	I1209 11:55:55.241790  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.241801  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:55.241809  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:55.241874  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:55.274393  662586 cri.go:89] found id: ""
	I1209 11:55:55.274434  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.274447  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:55.274455  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:55.274522  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:55.307942  662586 cri.go:89] found id: ""
	I1209 11:55:55.307988  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.308002  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:55.308012  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:55.308088  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:55.340074  662586 cri.go:89] found id: ""
	I1209 11:55:55.340107  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.340116  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:55.340122  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:55.340196  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:55.388077  662586 cri.go:89] found id: ""
	I1209 11:55:55.388119  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.388140  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:55.388149  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:55.388230  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:55.422923  662586 cri.go:89] found id: ""
	I1209 11:55:55.422961  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.422975  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:55.422990  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:55.423008  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:55.476178  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:55.476219  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:55.489891  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:55.489919  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:55.555705  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:55.555726  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:55.555745  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:55.634818  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:55.634862  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:55.396320  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:57.895122  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:57.951119  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:00.451659  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:58.092427  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:00.590304  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:58.173169  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:58.188529  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:58.188620  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:58.225602  662586 cri.go:89] found id: ""
	I1209 11:55:58.225630  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.225641  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:58.225649  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:58.225709  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:58.259597  662586 cri.go:89] found id: ""
	I1209 11:55:58.259638  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.259652  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:58.259662  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:58.259744  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:58.293287  662586 cri.go:89] found id: ""
	I1209 11:55:58.293320  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.293329  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:58.293336  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:58.293390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:58.326581  662586 cri.go:89] found id: ""
	I1209 11:55:58.326611  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.326622  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:58.326630  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:58.326699  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:58.359636  662586 cri.go:89] found id: ""
	I1209 11:55:58.359665  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.359675  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:58.359681  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:58.359736  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:58.396767  662586 cri.go:89] found id: ""
	I1209 11:55:58.396798  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.396809  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:58.396818  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:58.396887  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:58.428907  662586 cri.go:89] found id: ""
	I1209 11:55:58.428941  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.428954  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:58.428962  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:58.429032  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:58.466082  662586 cri.go:89] found id: ""
	I1209 11:55:58.466124  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.466136  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:58.466149  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:58.466186  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:58.542333  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:58.542378  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:58.582397  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:58.582436  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:58.632980  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:58.633030  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:58.648464  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:58.648514  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:58.711714  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
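The repeated "connection to the server localhost:8443 was refused" failures above mean kubectl on the node cannot reach any apiserver on port 8443, which is consistent with the empty crictl listings: no kube-apiserver container is running yet. A minimal, hypothetical Go sketch of such a reachability probe (not minikube's actual code; the address and timeout are assumptions) looks like this:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probe attempts a TCP connection to the apiserver endpoint.
    // A "connection refused" error here corresponds to the kubectl
    // failures quoted in the log above: nothing is listening on 8443.
    func probe(addr string, timeout time.Duration) error {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            return err
        }
        return conn.Close()
    }

    func main() {
        // localhost:8443 mirrors the endpoint from the log; adjust as needed.
        if err := probe("localhost:8443", 2*time.Second); err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        fmt.Println("apiserver port is accepting connections")
    }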
	I1209 11:56:01.212475  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:01.225574  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:01.225642  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:01.259666  662586 cri.go:89] found id: ""
	I1209 11:56:01.259704  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.259718  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:01.259726  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:01.259800  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:01.295433  662586 cri.go:89] found id: ""
	I1209 11:56:01.295474  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.295495  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:01.295503  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:01.295561  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:01.330316  662586 cri.go:89] found id: ""
	I1209 11:56:01.330352  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.330364  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:01.330373  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:01.330447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:01.366762  662586 cri.go:89] found id: ""
	I1209 11:56:01.366797  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.366808  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:01.366814  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:01.366878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:01.403511  662586 cri.go:89] found id: ""
	I1209 11:56:01.403539  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.403547  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:01.403553  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:01.403604  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:01.436488  662586 cri.go:89] found id: ""
	I1209 11:56:01.436526  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.436538  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:01.436546  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:01.436617  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:01.471647  662586 cri.go:89] found id: ""
	I1209 11:56:01.471676  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.471685  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:01.471690  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:01.471744  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:01.504065  662586 cri.go:89] found id: ""
	I1209 11:56:01.504099  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.504111  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:01.504124  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:01.504143  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:01.553434  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:01.553482  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:01.567537  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:01.567579  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:01.636968  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:01.636995  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:01.637012  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:01.713008  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:01.713049  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:59.896841  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.396972  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.451893  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.453118  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.591218  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.592199  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.253143  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:04.266428  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:04.266512  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:04.298769  662586 cri.go:89] found id: ""
	I1209 11:56:04.298810  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.298823  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:04.298833  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:04.298913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:04.330392  662586 cri.go:89] found id: ""
	I1209 11:56:04.330428  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.330441  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:04.330449  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:04.330528  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:04.362409  662586 cri.go:89] found id: ""
	I1209 11:56:04.362443  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.362455  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:04.362463  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:04.362544  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:04.396853  662586 cri.go:89] found id: ""
	I1209 11:56:04.396884  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.396893  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:04.396899  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:04.396966  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:04.430425  662586 cri.go:89] found id: ""
	I1209 11:56:04.430461  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.430470  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:04.430477  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:04.430531  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:04.465354  662586 cri.go:89] found id: ""
	I1209 11:56:04.465391  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.465403  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:04.465411  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:04.465480  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:04.500114  662586 cri.go:89] found id: ""
	I1209 11:56:04.500156  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.500167  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:04.500179  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:04.500259  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:04.534853  662586 cri.go:89] found id: ""
	I1209 11:56:04.534888  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.534902  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:04.534914  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:04.534928  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:04.586419  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:04.586457  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:04.600690  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:04.600728  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:04.669645  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:04.669685  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:04.669703  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:04.747973  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:04.748026  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
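Each retry cycle above runs `sudo crictl ps -a --quiet --name=<component>` for every control-plane component and finds no containers. A hedged Go sketch of that discovery loop, shelling out to crictl (the component list and use of sudo come from the log; everything else is illustrative, not minikube's implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Components checked in the cycles above; kindnet and the dashboard
        // are probed even though this cluster never runs them.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            // --quiet prints only container IDs, one per line.
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
        }
    }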
	I1209 11:56:07.288721  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:07.302905  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:07.302975  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:07.336686  662586 cri.go:89] found id: ""
	I1209 11:56:07.336720  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.336728  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:07.336735  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:07.336798  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:07.370119  662586 cri.go:89] found id: ""
	I1209 11:56:07.370150  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.370159  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:07.370165  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:07.370245  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:07.402818  662586 cri.go:89] found id: ""
	I1209 11:56:07.402845  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.402853  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:07.402861  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:07.402923  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:07.437694  662586 cri.go:89] found id: ""
	I1209 11:56:07.437722  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.437732  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:07.437741  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:07.437806  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:07.474576  662586 cri.go:89] found id: ""
	I1209 11:56:07.474611  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.474622  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:07.474629  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:07.474705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:07.508538  662586 cri.go:89] found id: ""
	I1209 11:56:07.508575  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.508585  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:07.508592  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:07.508661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:07.548863  662586 cri.go:89] found id: ""
	I1209 11:56:07.548897  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.548911  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:07.548922  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:07.549093  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:07.592515  662586 cri.go:89] found id: ""
	I1209 11:56:07.592543  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.592555  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:07.592564  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:07.592579  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:07.652176  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:07.652219  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:04.895898  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.395712  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.398273  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:06.950668  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.450539  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.091573  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.591049  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.703040  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:07.703094  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:07.717880  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:07.717924  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:07.783396  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:07.783425  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:07.783441  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:10.362395  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:10.377478  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:10.377574  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:10.411923  662586 cri.go:89] found id: ""
	I1209 11:56:10.411956  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.411969  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:10.411978  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:10.412049  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:10.444601  662586 cri.go:89] found id: ""
	I1209 11:56:10.444633  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.444642  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:10.444648  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:10.444705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:10.486720  662586 cri.go:89] found id: ""
	I1209 11:56:10.486753  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.486763  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:10.486769  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:10.486822  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:10.523535  662586 cri.go:89] found id: ""
	I1209 11:56:10.523572  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.523581  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:10.523587  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:10.523641  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:10.557701  662586 cri.go:89] found id: ""
	I1209 11:56:10.557741  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.557754  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:10.557762  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:10.557834  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:10.593914  662586 cri.go:89] found id: ""
	I1209 11:56:10.593949  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.593959  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:10.593965  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:10.594017  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:10.626367  662586 cri.go:89] found id: ""
	I1209 11:56:10.626469  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.626482  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:10.626489  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:10.626547  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:10.665415  662586 cri.go:89] found id: ""
	I1209 11:56:10.665446  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.665456  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:10.665467  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:10.665480  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:10.747483  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:10.747532  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:10.787728  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:10.787758  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:10.840678  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:10.840722  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:10.855774  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:10.855809  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:10.929638  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:11.896254  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:14.395661  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:11.451031  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.452502  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:15.951720  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:11.592197  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.593711  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:16.091641  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.430793  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:13.446156  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:13.446261  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:13.491624  662586 cri.go:89] found id: ""
	I1209 11:56:13.491662  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.491675  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:13.491684  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:13.491758  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:13.537619  662586 cri.go:89] found id: ""
	I1209 11:56:13.537653  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.537666  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:13.537675  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:13.537750  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:13.585761  662586 cri.go:89] found id: ""
	I1209 11:56:13.585796  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.585810  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:13.585819  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:13.585883  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:13.620740  662586 cri.go:89] found id: ""
	I1209 11:56:13.620774  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.620785  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:13.620791  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:13.620858  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:13.654405  662586 cri.go:89] found id: ""
	I1209 11:56:13.654433  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.654442  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:13.654448  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:13.654509  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:13.687520  662586 cri.go:89] found id: ""
	I1209 11:56:13.687547  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.687558  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:13.687566  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:13.687642  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:13.721105  662586 cri.go:89] found id: ""
	I1209 11:56:13.721140  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.721153  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:13.721162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:13.721238  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:13.753900  662586 cri.go:89] found id: ""
	I1209 11:56:13.753933  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.753945  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:13.753960  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:13.753978  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:13.805864  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:13.805909  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:13.819356  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:13.819393  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:13.896097  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:13.896128  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:13.896150  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:13.979041  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:13.979084  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:16.516777  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:16.529916  662586 kubeadm.go:597] duration metric: took 4m1.869807937s to restartPrimaryControlPlane
	W1209 11:56:16.530015  662586 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:56:16.530067  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:56:16.396353  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.896097  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.451294  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:20.452525  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.092780  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:20.593275  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.635832  662586 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.105742271s)
	I1209 11:56:18.635914  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:56:18.651678  662586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:56:18.661965  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:56:18.672060  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:56:18.672082  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:56:18.672147  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:56:18.681627  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:56:18.681697  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:56:18.691514  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:56:18.701210  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:56:18.701292  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:56:18.710934  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:56:18.720506  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:56:18.720583  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:56:18.729996  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:56:18.739425  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:56:18.739486  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:56:18.748788  662586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:56:18.981849  662586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
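After 4m01s with no apiserver process, the run above falls back to `kubeadm reset`, checks /etc/kubernetes for stale kubeconfigs, and then re-runs `kubeadm init`. The stale-config check greps each conf file for the expected control-plane URL and removes the file if the URL is absent; here the files do not exist at all, so grep exits with status 2 and the rm is a no-op. A hedged local Go sketch of that check (file paths and URL are from the log; the logic is illustrative, not minikube's implementation):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                // Missing file: nothing to clean up (matches the
                // "No such file or directory" grep errors above).
                fmt.Printf("%s: %v\n", f, err)
                continue
            }
            if !strings.Contains(string(data), endpoint) {
                // Stale config pointing at some other endpoint: remove it
                // so kubeadm init can regenerate it.
                fmt.Printf("%s is stale, removing\n", f)
                _ = os.Remove(f)
            }
        }
    }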
	I1209 11:56:21.396764  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:23.894781  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:22.950912  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:24.951678  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:23.091306  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:24.592439  662109 pod_ready.go:82] duration metric: took 4m0.007699806s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	E1209 11:56:24.592477  662109 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1209 11:56:24.592486  662109 pod_ready.go:39] duration metric: took 4m7.416528348s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
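The pod_ready.go lines interleaved throughout this log are periodic polls of each system-critical pod's Ready condition; the three lines above show that wait hitting its 4-minute deadline for metrics-server and moving on. A hedged client-go sketch of the underlying readiness check (namespace, pod name, and kubeconfig path are taken from the log; the client wiring is an assumption, not minikube's code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True,
    // which is what the pod_ready.go polls above are waiting for.
    func isPodReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
            "metrics-server-6867b74b74-pwcsr", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("Ready:", isPodReady(pod))
    }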
	I1209 11:56:24.592504  662109 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:56:24.592537  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:24.592590  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:24.643050  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:24.643085  662109 cri.go:89] found id: ""
	I1209 11:56:24.643094  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:24.643151  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.647529  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:24.647590  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:24.683125  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:24.683150  662109 cri.go:89] found id: ""
	I1209 11:56:24.683159  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:24.683222  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.687584  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:24.687706  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:24.720663  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:24.720699  662109 cri.go:89] found id: ""
	I1209 11:56:24.720708  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:24.720769  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.724881  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:24.724942  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:24.766055  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:24.766081  662109 cri.go:89] found id: ""
	I1209 11:56:24.766091  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:24.766152  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.770491  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:24.770557  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:24.804523  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:24.804549  662109 cri.go:89] found id: ""
	I1209 11:56:24.804558  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:24.804607  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.808452  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:24.808528  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:24.846043  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:24.846072  662109 cri.go:89] found id: ""
	I1209 11:56:24.846084  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:24.846140  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.849991  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:24.850057  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:24.884853  662109 cri.go:89] found id: ""
	I1209 11:56:24.884889  662109 logs.go:282] 0 containers: []
	W1209 11:56:24.884902  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:24.884912  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:24.884983  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:24.920103  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:24.920131  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:24.920135  662109 cri.go:89] found id: ""
	I1209 11:56:24.920152  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:24.920223  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.924212  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.928416  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:24.928436  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:25.077407  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:25.077468  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:25.125600  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:25.125649  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:25.163222  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:25.163268  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:25.208430  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:25.208465  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:25.245884  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:25.245917  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:25.318723  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:25.318775  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:25.333173  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:25.333207  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:25.394636  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:25.394683  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:25.435210  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:25.435248  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:25.482142  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:25.482184  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:25.516975  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:25.517006  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:25.565526  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:25.565565  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:25.896281  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:28.395529  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:27.454449  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:29.950704  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:28.549071  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:28.567288  662109 api_server.go:72] duration metric: took 4m18.770451099s to wait for apiserver process to appear ...
	I1209 11:56:28.567319  662109 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:56:28.567367  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:28.567418  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:28.603341  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:28.603365  662109 cri.go:89] found id: ""
	I1209 11:56:28.603372  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:28.603423  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.607416  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:28.607493  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:28.647437  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:28.647465  662109 cri.go:89] found id: ""
	I1209 11:56:28.647477  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:28.647539  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.651523  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:28.651584  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:28.687889  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:28.687920  662109 cri.go:89] found id: ""
	I1209 11:56:28.687929  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:28.687983  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.692025  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:28.692100  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:28.728934  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:28.728961  662109 cri.go:89] found id: ""
	I1209 11:56:28.728969  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:28.729020  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.733217  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:28.733300  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:28.768700  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:28.768726  662109 cri.go:89] found id: ""
	I1209 11:56:28.768735  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:28.768790  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.772844  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:28.772921  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:28.812073  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:28.812104  662109 cri.go:89] found id: ""
	I1209 11:56:28.812116  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:28.812195  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.816542  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:28.816612  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:28.850959  662109 cri.go:89] found id: ""
	I1209 11:56:28.850997  662109 logs.go:282] 0 containers: []
	W1209 11:56:28.851010  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:28.851018  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:28.851075  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:28.894115  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:28.894142  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:28.894148  662109 cri.go:89] found id: ""
	I1209 11:56:28.894157  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:28.894228  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.899260  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.903033  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:28.903055  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:28.916411  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:28.916447  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:28.965873  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:28.965911  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:29.003553  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:29.003591  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:29.038945  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:29.038989  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:29.079595  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:29.079636  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:29.117632  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:29.117665  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:29.556193  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:29.556245  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:29.629530  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:29.629571  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:29.746102  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:29.746137  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:29.799342  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:29.799379  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:29.851197  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:29.851254  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:29.884688  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:29.884725  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:30.396025  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:32.396195  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:34.396605  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:31.951405  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:34.451838  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:32.425773  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:56:32.432276  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I1209 11:56:32.433602  662109 api_server.go:141] control plane version: v1.31.2
	I1209 11:56:32.433634  662109 api_server.go:131] duration metric: took 3.866306159s to wait for apiserver health ...
	I1209 11:56:32.433647  662109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:56:32.433680  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:32.433744  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:32.471560  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:32.471593  662109 cri.go:89] found id: ""
	I1209 11:56:32.471604  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:32.471684  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.475735  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:32.475809  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:32.509788  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:32.509821  662109 cri.go:89] found id: ""
	I1209 11:56:32.509833  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:32.509889  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.513849  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:32.513908  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:32.547022  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:32.547046  662109 cri.go:89] found id: ""
	I1209 11:56:32.547055  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:32.547113  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.551393  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:32.551476  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:32.586478  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:32.586516  662109 cri.go:89] found id: ""
	I1209 11:56:32.586536  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:32.586605  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.592876  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:32.592950  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:32.626775  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:32.626803  662109 cri.go:89] found id: ""
	I1209 11:56:32.626812  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:32.626869  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.630757  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:32.630825  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:32.663980  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:32.664013  662109 cri.go:89] found id: ""
	I1209 11:56:32.664026  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:32.664093  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.668368  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:32.668449  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:32.704638  662109 cri.go:89] found id: ""
	I1209 11:56:32.704675  662109 logs.go:282] 0 containers: []
	W1209 11:56:32.704688  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:32.704695  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:32.704752  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:32.743694  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:32.743729  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:32.743735  662109 cri.go:89] found id: ""
	I1209 11:56:32.743746  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:32.743814  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.749146  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.753226  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:32.753253  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:32.787832  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:32.787877  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:32.824859  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:32.824891  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:32.881776  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:32.881808  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:32.919018  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:32.919064  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:32.956839  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:32.956869  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:33.334255  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:33.334300  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:33.406008  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:33.406049  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:33.453689  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:33.453724  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:33.496168  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:33.496209  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:33.532057  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:33.532090  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:33.575050  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:33.575087  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:33.588543  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:33.588575  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:36.194483  662109 system_pods.go:59] 8 kube-system pods found
	I1209 11:56:36.194516  662109 system_pods.go:61] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running
	I1209 11:56:36.194522  662109 system_pods.go:61] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running
	I1209 11:56:36.194527  662109 system_pods.go:61] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running
	I1209 11:56:36.194531  662109 system_pods.go:61] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running
	I1209 11:56:36.194534  662109 system_pods.go:61] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:56:36.194538  662109 system_pods.go:61] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running
	I1209 11:56:36.194543  662109 system_pods.go:61] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:56:36.194549  662109 system_pods.go:61] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running
	I1209 11:56:36.194559  662109 system_pods.go:74] duration metric: took 3.76090495s to wait for pod list to return data ...
	I1209 11:56:36.194567  662109 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:56:36.197070  662109 default_sa.go:45] found service account: "default"
	I1209 11:56:36.197094  662109 default_sa.go:55] duration metric: took 2.520926ms for default service account to be created ...
	I1209 11:56:36.197104  662109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:56:36.201494  662109 system_pods.go:86] 8 kube-system pods found
	I1209 11:56:36.201518  662109 system_pods.go:89] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running
	I1209 11:56:36.201524  662109 system_pods.go:89] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running
	I1209 11:56:36.201528  662109 system_pods.go:89] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running
	I1209 11:56:36.201533  662109 system_pods.go:89] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running
	I1209 11:56:36.201537  662109 system_pods.go:89] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:56:36.201540  662109 system_pods.go:89] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running
	I1209 11:56:36.201547  662109 system_pods.go:89] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:56:36.201551  662109 system_pods.go:89] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running
	I1209 11:56:36.201558  662109 system_pods.go:126] duration metric: took 4.448871ms to wait for k8s-apps to be running ...
	I1209 11:56:36.201567  662109 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:56:36.201628  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:56:36.217457  662109 system_svc.go:56] duration metric: took 15.878252ms WaitForService to wait for kubelet
	I1209 11:56:36.217503  662109 kubeadm.go:582] duration metric: took 4m26.420670146s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:56:36.217527  662109 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:56:36.220498  662109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:56:36.220526  662109 node_conditions.go:123] node cpu capacity is 2
	I1209 11:56:36.220572  662109 node_conditions.go:105] duration metric: took 3.039367ms to run NodePressure ...
	I1209 11:56:36.220586  662109 start.go:241] waiting for startup goroutines ...
	I1209 11:56:36.220597  662109 start.go:246] waiting for cluster config update ...
	I1209 11:56:36.220628  662109 start.go:255] writing updated cluster config ...
	I1209 11:56:36.220974  662109 ssh_runner.go:195] Run: rm -f paused
	I1209 11:56:36.272920  662109 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:56:36.274686  662109 out.go:177] * Done! kubectl is now configured to use "no-preload-820741" cluster and "default" namespace by default
	I1209 11:56:36.895681  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:38.896066  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:36.951281  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:39.455225  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:41.395880  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:43.895464  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:41.951287  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:44.451357  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:45.896184  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:48.398617  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:46.451733  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:48.950857  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:50.950964  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:50.895678  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:52.896291  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:53.389365  663024 pod_ready.go:82] duration metric: took 4m0.00015362s for pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace to be "Ready" ...
	E1209 11:56:53.389414  663024 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1209 11:56:53.389440  663024 pod_ready.go:39] duration metric: took 4m13.044002506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:56:53.389480  663024 kubeadm.go:597] duration metric: took 4m21.286289463s to restartPrimaryControlPlane
	W1209 11:56:53.389572  663024 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:56:53.389610  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:56:52.951153  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:55.451223  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:57.950413  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:00.449904  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:02.450069  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:04.451074  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:06.950873  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:08.951176  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:11.450596  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:13.451552  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:13.944884  661546 pod_ready.go:82] duration metric: took 4m0.000348644s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" ...
	E1209 11:57:13.944919  661546 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I1209 11:57:13.944943  661546 pod_ready.go:39] duration metric: took 4m14.049505666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:13.944980  661546 kubeadm.go:597] duration metric: took 4m22.094543781s to restartPrimaryControlPlane
	W1209 11:57:13.945086  661546 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:57:13.945123  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:57:19.569119  663024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.179481312s)
	I1209 11:57:19.569196  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:19.583584  663024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:57:19.592807  663024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:57:19.602121  663024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:57:19.602190  663024 kubeadm.go:157] found existing configuration files:
	
	I1209 11:57:19.602249  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 11:57:19.611109  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:57:19.611187  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:57:19.620264  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 11:57:19.629026  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:57:19.629103  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:57:19.638036  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 11:57:19.646265  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:57:19.646331  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:57:19.655187  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 11:57:19.663908  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:57:19.663962  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:57:19.673002  663024 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:57:19.717664  663024 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 11:57:19.717737  663024 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:57:19.818945  663024 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:57:19.819065  663024 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:57:19.819160  663024 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 11:57:19.828186  663024 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:57:19.829831  663024 out.go:235]   - Generating certificates and keys ...
	I1209 11:57:19.829938  663024 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:57:19.830031  663024 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:57:19.830145  663024 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:57:19.830252  663024 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:57:19.830377  663024 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:57:19.830470  663024 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:57:19.830568  663024 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:57:19.830644  663024 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:57:19.830745  663024 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:57:19.830825  663024 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:57:19.830878  663024 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:57:19.830963  663024 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:57:19.961813  663024 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:57:20.436964  663024 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 11:57:20.652041  663024 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:57:20.837664  663024 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:57:20.892035  663024 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:57:20.892497  663024 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:57:20.895295  663024 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:57:20.896871  663024 out.go:235]   - Booting up control plane ...
	I1209 11:57:20.896992  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:57:20.897139  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:57:20.897260  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:57:20.914735  663024 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:57:20.920520  663024 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:57:20.920566  663024 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:57:21.047290  663024 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 11:57:21.047437  663024 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 11:57:22.049131  663024 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001914766s
	I1209 11:57:22.049257  663024 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 11:57:27.053443  663024 kubeadm.go:310] [api-check] The API server is healthy after 5.002570817s
	I1209 11:57:27.068518  663024 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 11:57:27.086371  663024 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 11:57:27.114617  663024 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 11:57:27.114833  663024 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-482476 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 11:57:27.131354  663024 kubeadm.go:310] [bootstrap-token] Using token: 6aanjy.0y855mmcca5ic9co
	I1209 11:57:27.132852  663024 out.go:235]   - Configuring RBAC rules ...
	I1209 11:57:27.132992  663024 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 11:57:27.139770  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 11:57:27.147974  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 11:57:27.155508  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 11:57:27.159181  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 11:57:27.163403  663024 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 11:57:27.458812  663024 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 11:57:27.900322  663024 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 11:57:28.458864  663024 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 11:57:28.459944  663024 kubeadm.go:310] 
	I1209 11:57:28.460043  663024 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 11:57:28.460054  663024 kubeadm.go:310] 
	I1209 11:57:28.460156  663024 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 11:57:28.460166  663024 kubeadm.go:310] 
	I1209 11:57:28.460198  663024 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 11:57:28.460284  663024 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 11:57:28.460385  663024 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 11:57:28.460414  663024 kubeadm.go:310] 
	I1209 11:57:28.460499  663024 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 11:57:28.460509  663024 kubeadm.go:310] 
	I1209 11:57:28.460576  663024 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 11:57:28.460586  663024 kubeadm.go:310] 
	I1209 11:57:28.460663  663024 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 11:57:28.460766  663024 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 11:57:28.460862  663024 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 11:57:28.460871  663024 kubeadm.go:310] 
	I1209 11:57:28.460992  663024 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 11:57:28.461096  663024 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 11:57:28.461121  663024 kubeadm.go:310] 
	I1209 11:57:28.461244  663024 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 6aanjy.0y855mmcca5ic9co \
	I1209 11:57:28.461395  663024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 11:57:28.461435  663024 kubeadm.go:310] 	--control-plane 
	I1209 11:57:28.461446  663024 kubeadm.go:310] 
	I1209 11:57:28.461551  663024 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 11:57:28.461574  663024 kubeadm.go:310] 
	I1209 11:57:28.461679  663024 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 6aanjy.0y855mmcca5ic9co \
	I1209 11:57:28.461832  663024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 11:57:28.462544  663024 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:57:28.462594  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:57:28.462620  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:57:28.464574  663024 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:57:28.465952  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:57:28.476155  663024 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:57:28.493471  663024 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:57:28.493551  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:28.493594  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-482476 minikube.k8s.io/updated_at=2024_12_09T11_57_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=default-k8s-diff-port-482476 minikube.k8s.io/primary=true
	I1209 11:57:28.506467  663024 ops.go:34] apiserver oom_adj: -16
	I1209 11:57:28.724224  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:29.224971  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:29.724660  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:30.224466  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:30.724354  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:31.224702  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:31.725101  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.224364  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.724357  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.844191  663024 kubeadm.go:1113] duration metric: took 4.350713188s to wait for elevateKubeSystemPrivileges
	I1209 11:57:32.844243  663024 kubeadm.go:394] duration metric: took 5m0.79272843s to StartCluster
	I1209 11:57:32.844287  663024 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:32.844417  663024 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:57:32.846697  663024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:32.847014  663024 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:57:32.847067  663024 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:57:32.847162  663024 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847186  663024 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847192  663024 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.847201  663024 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:57:32.847204  663024 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-482476"
	I1209 11:57:32.847228  663024 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847272  663024 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.847287  663024 addons.go:243] addon metrics-server should already be in state true
	I1209 11:57:32.847285  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:57:32.847328  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.847237  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.847705  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847713  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847750  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.847755  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.847841  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847873  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.848599  663024 out.go:177] * Verifying Kubernetes components...
	I1209 11:57:32.850246  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:57:32.864945  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44639
	I1209 11:57:32.865141  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37211
	I1209 11:57:32.865203  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42835
	I1209 11:57:32.865473  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.865635  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.865733  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.866096  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866115  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866241  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866264  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866241  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866316  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866642  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866654  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866656  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866865  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.867243  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.867287  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.867321  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.867358  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.871085  663024 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.871109  663024 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:57:32.871142  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.871395  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.871431  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.883301  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45049
	I1209 11:57:32.883976  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.884508  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36509
	I1209 11:57:32.884758  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.884775  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.885123  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.885279  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.885610  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.885801  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.885817  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.886142  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.886347  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.888357  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.888762  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
	I1209 11:57:32.889103  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.889192  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.889669  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.889692  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.890035  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.890082  663024 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:57:32.890647  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.890687  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.890867  663024 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:57:32.891756  663024 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:32.891774  663024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:57:32.891794  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.892543  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:57:32.892563  663024 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:57:32.892587  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.896754  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897437  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.897471  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897752  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.897836  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897975  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.898370  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.898381  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.898395  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.898556  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:32.898649  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.898829  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.898975  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.899101  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:32.907891  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34063
	I1209 11:57:32.908317  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.908827  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.908848  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.909352  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.909551  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.911172  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.911417  663024 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:32.911434  663024 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:57:32.911460  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.914016  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.914474  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.914490  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.914646  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.914838  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.914965  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.915071  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:33.067075  663024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:57:33.085671  663024 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-482476" to be "Ready" ...
	I1209 11:57:33.095765  663024 node_ready.go:49] node "default-k8s-diff-port-482476" has status "Ready":"True"
	I1209 11:57:33.095801  663024 node_ready.go:38] duration metric: took 10.096442ms for node "default-k8s-diff-port-482476" to be "Ready" ...
	I1209 11:57:33.095815  663024 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:33.105497  663024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:33.200059  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:33.218467  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:57:33.218496  663024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:57:33.225990  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:33.278736  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:57:33.278772  663024 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:57:33.342270  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:33.342304  663024 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:57:33.412771  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:34.250639  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.050535014s)
	I1209 11:57:34.250706  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.250720  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.250704  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.024681453s)
	I1209 11:57:34.250811  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.250820  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.251151  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.251170  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.251182  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.251192  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.251197  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.251238  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.251245  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.251253  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.251261  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.253136  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.253141  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.253180  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.253182  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.253194  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.253214  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.279650  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.279682  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.280064  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.280116  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.280130  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.656217  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.243394493s)
	I1209 11:57:34.656287  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.656305  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.656641  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.656655  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.656671  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.656683  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.656691  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.656982  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.656999  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.657011  663024 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-482476"
	I1209 11:57:34.658878  663024 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1209 11:57:34.660089  663024 addons.go:510] duration metric: took 1.813029421s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1209 11:57:35.122487  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:36.112072  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.112097  663024 pod_ready.go:82] duration metric: took 3.006564547s for pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.112110  663024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.117521  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.117545  663024 pod_ready.go:82] duration metric: took 5.428168ms for pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.117554  663024 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.122929  663024 pod_ready.go:93] pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.122953  663024 pod_ready.go:82] duration metric: took 5.392834ms for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.122972  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.127025  663024 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.127047  663024 pod_ready.go:82] duration metric: took 4.068175ms for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.127056  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.131036  663024 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.131055  663024 pod_ready.go:82] duration metric: took 3.993825ms for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.131064  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pgs52" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.508951  663024 pod_ready.go:93] pod "kube-proxy-pgs52" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.508980  663024 pod_ready.go:82] duration metric: took 377.910722ms for pod "kube-proxy-pgs52" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.508991  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.909065  663024 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.909093  663024 pod_ready.go:82] duration metric: took 400.095775ms for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.909100  663024 pod_ready.go:39] duration metric: took 3.813270613s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
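(Annotation, not part of the captured log.) The pod_ready.go lines above wait until each system-critical pod reports the PodReady condition as True. Below is a minimal sketch, assuming client-go is available, of performing that same check once for a single pod; the kubeconfig path and pod name simply mirror values seen in this log and would differ on another cluster.

// pod_ready_sketch.go - a minimal sketch, not minikube's code, of reading the
// PodReady condition that the pod_ready.go lines above poll for.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady returns true when the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path and pod name are illustrative, taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-7c65d6cfc9-7rr27", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podIsReady(pod))
}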
	I1209 11:57:36.909116  663024 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:57:36.909169  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:57:36.924688  663024 api_server.go:72] duration metric: took 4.077626254s to wait for apiserver process to appear ...
	I1209 11:57:36.924726  663024 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:57:36.924752  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:57:36.930782  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 200:
	ok
	I1209 11:57:36.931734  663024 api_server.go:141] control plane version: v1.31.2
	I1209 11:57:36.931758  663024 api_server.go:131] duration metric: took 7.024599ms to wait for apiserver health ...
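(Annotation, not part of the captured log.) The api_server.go lines above poll the apiserver's /healthz endpoint until it returns HTTP 200 with body "ok". A rough sketch of that poll is below; the URL matches the one in the log, the timeout is an illustrative assumption, and TLS verification is skipped only because these test clusters use self-signed certificates.

// healthz_poll_sketch.go - a hedged sketch of polling https://<node>:8444/healthz
// until it answers 200 "ok", as the log above does.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip verification purely for illustration; the test VM serves a self-signed cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, string(body))
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	// Endpoint copied from the log line above; adjust for another cluster.
	if err := waitForHealthz("https://192.168.50.25:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}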
	I1209 11:57:36.931766  663024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:57:37.112291  663024 system_pods.go:59] 9 kube-system pods found
	I1209 11:57:37.112323  663024 system_pods.go:61] "coredns-7c65d6cfc9-7rr27" [a5dd0401-80bf-4c87-9771-e1837c960425] Running
	I1209 11:57:37.112328  663024 system_pods.go:61] "coredns-7c65d6cfc9-bb47s" [2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4] Running
	I1209 11:57:37.112332  663024 system_pods.go:61] "etcd-default-k8s-diff-port-482476" [01dfcc5e-2ac5-4cee-8b68-ac1f3bf8866b] Running
	I1209 11:57:37.112337  663024 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-482476" [c65f1162-8d8a-47e8-8f1a-4abc0ebc8649] Running
	I1209 11:57:37.112340  663024 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-482476" [363e5f34-878a-434e-8bf2-21cdc06be305] Running
	I1209 11:57:37.112343  663024 system_pods.go:61] "kube-proxy-pgs52" [d5a3463e-e955-4345-9559-b23cce44fa0e] Running
	I1209 11:57:37.112346  663024 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-482476" [53f924fa-554b-4ceb-b171-efcfecdc137e] Running
	I1209 11:57:37.112356  663024 system_pods.go:61] "metrics-server-6867b74b74-2lmtn" [60803d31-d0b0-4d51-a9f2-cadafd184a90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:57:37.112363  663024 system_pods.go:61] "storage-provisioner" [6b53e3ba-9bc9-4b5a-bec9-d06336616c8a] Running
	I1209 11:57:37.112373  663024 system_pods.go:74] duration metric: took 180.599339ms to wait for pod list to return data ...
	I1209 11:57:37.112387  663024 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:57:37.309750  663024 default_sa.go:45] found service account: "default"
	I1209 11:57:37.309777  663024 default_sa.go:55] duration metric: took 197.382304ms for default service account to be created ...
	I1209 11:57:37.309787  663024 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:57:37.513080  663024 system_pods.go:86] 9 kube-system pods found
	I1209 11:57:37.513112  663024 system_pods.go:89] "coredns-7c65d6cfc9-7rr27" [a5dd0401-80bf-4c87-9771-e1837c960425] Running
	I1209 11:57:37.513118  663024 system_pods.go:89] "coredns-7c65d6cfc9-bb47s" [2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4] Running
	I1209 11:57:37.513121  663024 system_pods.go:89] "etcd-default-k8s-diff-port-482476" [01dfcc5e-2ac5-4cee-8b68-ac1f3bf8866b] Running
	I1209 11:57:37.513128  663024 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-482476" [c65f1162-8d8a-47e8-8f1a-4abc0ebc8649] Running
	I1209 11:57:37.513133  663024 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-482476" [363e5f34-878a-434e-8bf2-21cdc06be305] Running
	I1209 11:57:37.513136  663024 system_pods.go:89] "kube-proxy-pgs52" [d5a3463e-e955-4345-9559-b23cce44fa0e] Running
	I1209 11:57:37.513141  663024 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-482476" [53f924fa-554b-4ceb-b171-efcfecdc137e] Running
	I1209 11:57:37.513150  663024 system_pods.go:89] "metrics-server-6867b74b74-2lmtn" [60803d31-d0b0-4d51-a9f2-cadafd184a90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:57:37.513156  663024 system_pods.go:89] "storage-provisioner" [6b53e3ba-9bc9-4b5a-bec9-d06336616c8a] Running
	I1209 11:57:37.513168  663024 system_pods.go:126] duration metric: took 203.373238ms to wait for k8s-apps to be running ...
	I1209 11:57:37.513181  663024 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:57:37.513233  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:37.527419  663024 system_svc.go:56] duration metric: took 14.22618ms WaitForService to wait for kubelet
	I1209 11:57:37.527451  663024 kubeadm.go:582] duration metric: took 4.680397826s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:57:37.527473  663024 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:57:37.710396  663024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:57:37.710429  663024 node_conditions.go:123] node cpu capacity is 2
	I1209 11:57:37.710447  663024 node_conditions.go:105] duration metric: took 182.968526ms to run NodePressure ...
	I1209 11:57:37.710463  663024 start.go:241] waiting for startup goroutines ...
	I1209 11:57:37.710473  663024 start.go:246] waiting for cluster config update ...
	I1209 11:57:37.710487  663024 start.go:255] writing updated cluster config ...
	I1209 11:57:37.710799  663024 ssh_runner.go:195] Run: rm -f paused
	I1209 11:57:37.760468  663024 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:57:37.762472  663024 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-482476" cluster and "default" namespace by default
	I1209 11:57:40.219406  661546 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.274255602s)
	I1209 11:57:40.219478  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:40.234863  661546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:57:40.245357  661546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:57:40.255253  661546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:57:40.255276  661546 kubeadm.go:157] found existing configuration files:
	
	I1209 11:57:40.255319  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:57:40.264881  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:57:40.264934  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:57:40.274990  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:57:40.284941  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:57:40.284998  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:57:40.295188  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:57:40.305136  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:57:40.305181  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:57:40.315125  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:57:40.324727  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:57:40.324789  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
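(Annotation, not part of the captured log.) The grep/rm sequence above is a stale-config check: each kubeconfig under /etc/kubernetes is kept only if it references the expected control-plane endpoint; otherwise it is removed so the subsequent kubeadm init can regenerate it. A local sketch of that logic follows; the real run executes these steps over SSH as root, so treat this as illustrative only.

// stale_config_sketch.go - a sketch of the stale kubeconfig cleanup shown above.
// Paths and endpoint mirror the log; running this for real requires root.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: treat the config as stale and drop it,
			// exactly the decision the grep exit status drives in the log.
			_ = os.Remove(f)
			fmt.Printf("removed stale config %s\n", f)
			continue
		}
		fmt.Printf("keeping %s\n", f)
	}
}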
	I1209 11:57:40.333574  661546 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:57:40.378743  661546 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 11:57:40.378932  661546 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:57:40.492367  661546 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:57:40.492493  661546 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:57:40.492658  661546 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 11:57:40.504994  661546 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:57:40.506760  661546 out.go:235]   - Generating certificates and keys ...
	I1209 11:57:40.506878  661546 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:57:40.506955  661546 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:57:40.507033  661546 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:57:40.507088  661546 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:57:40.507156  661546 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:57:40.507274  661546 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:57:40.507377  661546 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:57:40.507463  661546 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:57:40.507573  661546 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:57:40.507692  661546 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:57:40.507756  661546 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:57:40.507836  661546 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:57:40.607744  661546 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:57:40.684950  661546 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 11:57:40.826079  661546 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:57:40.945768  661546 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:57:41.212984  661546 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:57:41.213406  661546 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:57:41.216390  661546 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:57:41.218053  661546 out.go:235]   - Booting up control plane ...
	I1209 11:57:41.218202  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:57:41.218307  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:57:41.220009  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:57:41.237816  661546 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:57:41.244148  661546 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:57:41.244204  661546 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:57:41.371083  661546 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 11:57:41.371245  661546 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 11:57:41.872938  661546 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.998998ms
	I1209 11:57:41.873141  661546 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 11:57:46.874725  661546 kubeadm.go:310] [api-check] The API server is healthy after 5.001587898s
	I1209 11:57:46.886996  661546 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 11:57:46.897941  661546 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 11:57:46.927451  661546 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 11:57:46.927718  661546 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-005123 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 11:57:46.945578  661546 kubeadm.go:310] [bootstrap-token] Using token: bhdcn7.orsewwwtbk1gmdg8
	I1209 11:57:46.946894  661546 out.go:235]   - Configuring RBAC rules ...
	I1209 11:57:46.947041  661546 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 11:57:46.950006  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 11:57:46.956761  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 11:57:46.959756  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 11:57:46.962973  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 11:57:46.970016  661546 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 11:57:47.282251  661546 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 11:57:47.714588  661546 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 11:57:48.283610  661546 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 11:57:48.283671  661546 kubeadm.go:310] 
	I1209 11:57:48.283774  661546 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 11:57:48.283786  661546 kubeadm.go:310] 
	I1209 11:57:48.283901  661546 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 11:57:48.283948  661546 kubeadm.go:310] 
	I1209 11:57:48.283995  661546 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 11:57:48.284089  661546 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 11:57:48.284139  661546 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 11:57:48.284148  661546 kubeadm.go:310] 
	I1209 11:57:48.284216  661546 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 11:57:48.284224  661546 kubeadm.go:310] 
	I1209 11:57:48.284281  661546 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 11:57:48.284291  661546 kubeadm.go:310] 
	I1209 11:57:48.284359  661546 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 11:57:48.284465  661546 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 11:57:48.284583  661546 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 11:57:48.284596  661546 kubeadm.go:310] 
	I1209 11:57:48.284739  661546 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 11:57:48.284846  661546 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 11:57:48.284859  661546 kubeadm.go:310] 
	I1209 11:57:48.284972  661546 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bhdcn7.orsewwwtbk1gmdg8 \
	I1209 11:57:48.285133  661546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 11:57:48.285170  661546 kubeadm.go:310] 	--control-plane 
	I1209 11:57:48.285184  661546 kubeadm.go:310] 
	I1209 11:57:48.285312  661546 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 11:57:48.285321  661546 kubeadm.go:310] 
	I1209 11:57:48.285388  661546 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bhdcn7.orsewwwtbk1gmdg8 \
	I1209 11:57:48.285530  661546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 11:57:48.286117  661546 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:57:48.286246  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:57:48.286263  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:57:48.288141  661546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:57:48.289484  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:57:48.301160  661546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:57:48.320752  661546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:57:48.320831  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:48.320831  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-005123 minikube.k8s.io/updated_at=2024_12_09T11_57_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=embed-certs-005123 minikube.k8s.io/primary=true
	I1209 11:57:48.552069  661546 ops.go:34] apiserver oom_adj: -16
	I1209 11:57:48.552119  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:49.052304  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:49.552516  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:50.052548  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:50.552931  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:51.052381  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:51.552589  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.052273  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.552546  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.645059  661546 kubeadm.go:1113] duration metric: took 4.324296774s to wait for elevateKubeSystemPrivileges
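(Annotation, not part of the captured log.) The repeated `kubectl get sa default` lines between 11:57:48 and 11:57:52 are a simple retry loop: after the cluster-admin binding is created, the tooling polls until the default service account exists before declaring kube-system privileges elevated. A hedged sketch of that loop, assuming kubectl at the path shown in the log and sudo access on the node:

// sa_wait_sketch.go - an illustrative retry loop matching the polling pattern above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Command copied from the log; binary path and kubeconfig are cluster-specific.
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.2/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists; privileges are in place")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}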
	I1209 11:57:52.645107  661546 kubeadm.go:394] duration metric: took 5m0.847017281s to StartCluster
	I1209 11:57:52.645133  661546 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:52.645241  661546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:57:52.647822  661546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:52.648129  661546 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:57:52.648226  661546 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:57:52.648338  661546 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-005123"
	I1209 11:57:52.648354  661546 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-005123"
	W1209 11:57:52.648366  661546 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:57:52.648367  661546 addons.go:69] Setting default-storageclass=true in profile "embed-certs-005123"
	I1209 11:57:52.648396  661546 config.go:182] Loaded profile config "embed-certs-005123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:57:52.648397  661546 addons.go:69] Setting metrics-server=true in profile "embed-certs-005123"
	I1209 11:57:52.648434  661546 addons.go:234] Setting addon metrics-server=true in "embed-certs-005123"
	I1209 11:57:52.648399  661546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-005123"
	W1209 11:57:52.648448  661546 addons.go:243] addon metrics-server should already be in state true
	I1209 11:57:52.648499  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.648400  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.648867  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648883  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648914  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648932  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.648947  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.648917  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.649702  661546 out.go:177] * Verifying Kubernetes components...
	I1209 11:57:52.651094  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:57:52.665090  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38065
	I1209 11:57:52.665309  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35905
	I1209 11:57:52.665602  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.665889  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.666308  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.666329  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.666470  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.666492  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.666768  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.666907  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.667140  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I1209 11:57:52.667344  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.667387  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.667536  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.667580  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.667652  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.668127  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.668154  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.668657  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.668868  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.672550  661546 addons.go:234] Setting addon default-storageclass=true in "embed-certs-005123"
	W1209 11:57:52.672580  661546 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:57:52.672612  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.672985  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.673032  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.684848  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43667
	I1209 11:57:52.684854  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I1209 11:57:52.685398  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.685451  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.686054  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.686081  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.686155  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.686228  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.686553  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.686614  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.686753  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.686930  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.687838  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33245
	I1209 11:57:52.688391  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.688818  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.689013  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.689040  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.689314  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.689450  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.689908  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.689943  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.691136  661546 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:57:52.691137  661546 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:57:52.692714  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:57:52.692732  661546 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:57:52.692749  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.692789  661546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:52.692800  661546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:57:52.692813  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.696349  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.696791  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.696815  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697088  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697143  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.697314  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.697482  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.697512  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.697547  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697658  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.697787  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.697962  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.698093  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.698209  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.705766  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37265
	I1209 11:57:52.706265  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.706694  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.706721  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.707031  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.707241  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.708747  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.708980  661546 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:52.708997  661546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:57:52.709016  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.711546  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.711986  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.712011  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.712263  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.712438  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.712604  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.712751  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.858535  661546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:57:52.879035  661546 node_ready.go:35] waiting up to 6m0s for node "embed-certs-005123" to be "Ready" ...
	I1209 11:57:52.899550  661546 node_ready.go:49] node "embed-certs-005123" has status "Ready":"True"
	I1209 11:57:52.899575  661546 node_ready.go:38] duration metric: took 20.508179ms for node "embed-certs-005123" to be "Ready" ...
	I1209 11:57:52.899589  661546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:52.960716  661546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:52.962755  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:57:52.962779  661546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:57:52.995747  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:57:52.995787  661546 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:57:53.031395  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:53.031426  661546 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:57:53.031535  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:53.049695  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:53.061716  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:53.314158  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.314212  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.314523  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:53.314548  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.314565  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:53.314586  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.314598  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.314857  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.314875  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:53.323573  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.323590  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.323822  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:53.323873  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.323882  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.004616  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.004655  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.005050  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.005067  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.005075  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.005083  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.005351  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.005372  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.005370  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:54.352527  661546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.290758533s)
	I1209 11:57:54.352616  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.352636  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.352957  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.352977  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.352987  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.352995  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.353278  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.353320  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.353336  661546 addons.go:475] Verifying addon metrics-server=true in "embed-certs-005123"
	I1209 11:57:54.353387  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:54.355153  661546 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1209 11:57:54.356250  661546 addons.go:510] duration metric: took 1.708044398s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1209 11:57:54.968202  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:57.467948  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:57.467979  661546 pod_ready.go:82] duration metric: took 4.507228843s for pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:57.467992  661546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:59.475024  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace has status "Ready":"False"
	I1209 11:58:00.473961  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.473987  661546 pod_ready.go:82] duration metric: took 3.005987981s for pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.473996  661546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.478022  661546 pod_ready.go:93] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.478040  661546 pod_ready.go:82] duration metric: took 4.038353ms for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.478049  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.482415  661546 pod_ready.go:93] pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.482439  661546 pod_ready.go:82] duration metric: took 4.384854ms for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.482449  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.486284  661546 pod_ready.go:93] pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.486311  661546 pod_ready.go:82] duration metric: took 3.85467ms for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.486326  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n4pph" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.490260  661546 pod_ready.go:93] pod "kube-proxy-n4pph" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.490284  661546 pod_ready.go:82] duration metric: took 3.949342ms for pod "kube-proxy-n4pph" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.490296  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.872396  661546 pod_ready.go:93] pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.872420  661546 pod_ready.go:82] duration metric: took 382.116873ms for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.872428  661546 pod_ready.go:39] duration metric: took 7.97282742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:58:00.872446  661546 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:58:00.872502  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:58:00.887281  661546 api_server.go:72] duration metric: took 8.239108757s to wait for apiserver process to appear ...
	I1209 11:58:00.887312  661546 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:58:00.887333  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:58:00.892005  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 200:
	ok
	I1209 11:58:00.893247  661546 api_server.go:141] control plane version: v1.31.2
	I1209 11:58:00.893277  661546 api_server.go:131] duration metric: took 5.95753ms to wait for apiserver health ...
	I1209 11:58:00.893288  661546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:58:01.074723  661546 system_pods.go:59] 9 kube-system pods found
	I1209 11:58:01.074756  661546 system_pods.go:61] "coredns-7c65d6cfc9-t49mk" [ca3ba094-58a2-401d-8aea-46d6d96baacb] Running
	I1209 11:58:01.074762  661546 system_pods.go:61] "coredns-7c65d6cfc9-xspr9" [9384e9ea-987e-4728-bdf2-773645d52ab1] Running
	I1209 11:58:01.074766  661546 system_pods.go:61] "etcd-embed-certs-005123" [f8b23ab4-b852-4598-93d7-3b6eb1543a4b] Running
	I1209 11:58:01.074771  661546 system_pods.go:61] "kube-apiserver-embed-certs-005123" [96b0cb5a-1d81-48fc-bbc9-015de7c48ac5] Running
	I1209 11:58:01.074774  661546 system_pods.go:61] "kube-controller-manager-embed-certs-005123" [33c237d8-8f5a-4d8f-870a-7ba0dc2180dd] Running
	I1209 11:58:01.074777  661546 system_pods.go:61] "kube-proxy-n4pph" [520d101f-0df0-413f-a0fc-22ecc2884d40] Running
	I1209 11:58:01.074780  661546 system_pods.go:61] "kube-scheduler-embed-certs-005123" [11dcaf15-fc3b-4945-af9b-d08aa5528679] Running
	I1209 11:58:01.074786  661546 system_pods.go:61] "metrics-server-6867b74b74-zfw9r" [8438b820-4cc5-4d7b-8af5-9349fdd87ca8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:58:01.074791  661546 system_pods.go:61] "storage-provisioner" [91ceb801-7262-4d7e-9623-c8c1931fc34b] Running
	I1209 11:58:01.074797  661546 system_pods.go:74] duration metric: took 181.502993ms to wait for pod list to return data ...
	I1209 11:58:01.074804  661546 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:58:01.272664  661546 default_sa.go:45] found service account: "default"
	I1209 11:58:01.272697  661546 default_sa.go:55] duration metric: took 197.886347ms for default service account to be created ...
	I1209 11:58:01.272707  661546 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:58:01.475062  661546 system_pods.go:86] 9 kube-system pods found
	I1209 11:58:01.475096  661546 system_pods.go:89] "coredns-7c65d6cfc9-t49mk" [ca3ba094-58a2-401d-8aea-46d6d96baacb] Running
	I1209 11:58:01.475102  661546 system_pods.go:89] "coredns-7c65d6cfc9-xspr9" [9384e9ea-987e-4728-bdf2-773645d52ab1] Running
	I1209 11:58:01.475105  661546 system_pods.go:89] "etcd-embed-certs-005123" [f8b23ab4-b852-4598-93d7-3b6eb1543a4b] Running
	I1209 11:58:01.475109  661546 system_pods.go:89] "kube-apiserver-embed-certs-005123" [96b0cb5a-1d81-48fc-bbc9-015de7c48ac5] Running
	I1209 11:58:01.475114  661546 system_pods.go:89] "kube-controller-manager-embed-certs-005123" [33c237d8-8f5a-4d8f-870a-7ba0dc2180dd] Running
	I1209 11:58:01.475118  661546 system_pods.go:89] "kube-proxy-n4pph" [520d101f-0df0-413f-a0fc-22ecc2884d40] Running
	I1209 11:58:01.475121  661546 system_pods.go:89] "kube-scheduler-embed-certs-005123" [11dcaf15-fc3b-4945-af9b-d08aa5528679] Running
	I1209 11:58:01.475131  661546 system_pods.go:89] "metrics-server-6867b74b74-zfw9r" [8438b820-4cc5-4d7b-8af5-9349fdd87ca8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:58:01.475138  661546 system_pods.go:89] "storage-provisioner" [91ceb801-7262-4d7e-9623-c8c1931fc34b] Running
	I1209 11:58:01.475148  661546 system_pods.go:126] duration metric: took 202.434687ms to wait for k8s-apps to be running ...
	I1209 11:58:01.475158  661546 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:58:01.475220  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:58:01.490373  661546 system_svc.go:56] duration metric: took 15.20079ms WaitForService to wait for kubelet
	I1209 11:58:01.490416  661546 kubeadm.go:582] duration metric: took 8.842250416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:58:01.490451  661546 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:58:01.673621  661546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:58:01.673651  661546 node_conditions.go:123] node cpu capacity is 2
	I1209 11:58:01.673662  661546 node_conditions.go:105] duration metric: took 183.205852ms to run NodePressure ...
	I1209 11:58:01.673674  661546 start.go:241] waiting for startup goroutines ...
	I1209 11:58:01.673681  661546 start.go:246] waiting for cluster config update ...
	I1209 11:58:01.673691  661546 start.go:255] writing updated cluster config ...
	I1209 11:58:01.673995  661546 ssh_runner.go:195] Run: rm -f paused
	I1209 11:58:01.725363  661546 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:58:01.727275  661546 out.go:177] * Done! kubectl is now configured to use "embed-certs-005123" cluster and "default" namespace by default
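(Annotation, not part of the captured log.) The failing v1.20.0 bring-up that follows stalls in kubeadm's kubelet-check phase: the probe of http://localhost:10248/healthz keeps getting "connection refused" because the kubelet never comes up. A small diagnostic sketch of that same probe is below; it is only useful when run on the node itself and is not part of the test harness.

// kubelet_healthz_sketch.go - probe the kubelet health endpoint that the failing
// kubeadm run below retries; confirms whether the kubelet is listening at all.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://127.0.0.1:10248/healthz")
	if err != nil {
		// Matches the log's "connection refused" symptom when the kubelet never starts.
		fmt.Println("kubelet healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, string(body))
}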
	I1209 11:58:14.994765  662586 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 11:58:14.994918  662586 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 11:58:14.995050  662586 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:58:14.995118  662586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:58:14.995182  662586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:58:14.995272  662586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:58:14.995353  662586 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:58:14.995410  662586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:58:14.996905  662586 out.go:235]   - Generating certificates and keys ...
	I1209 11:58:14.997000  662586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:58:14.997055  662586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:58:14.997123  662586 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:58:14.997184  662586 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:58:14.997278  662586 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:58:14.997349  662586 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:58:14.997474  662586 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:58:14.997567  662586 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:58:14.997631  662586 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:58:14.997700  662586 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:58:14.997736  662586 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:58:14.997783  662586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:58:14.997826  662586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:58:14.997871  662586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:58:14.997930  662586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:58:14.997977  662586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:58:14.998063  662586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:58:14.998141  662586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:58:14.998199  662586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:58:14.998264  662586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:58:14.999539  662586 out.go:235]   - Booting up control plane ...
	I1209 11:58:14.999663  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:58:14.999748  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:58:14.999824  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:58:14.999946  662586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:58:15.000148  662586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:58:15.000221  662586 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:58:15.000326  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000532  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.000598  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000753  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.000814  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000971  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001064  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.001273  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001335  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.001486  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001493  662586 kubeadm.go:310] 
	I1209 11:58:15.001553  662586 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 11:58:15.001616  662586 kubeadm.go:310] 		timed out waiting for the condition
	I1209 11:58:15.001631  662586 kubeadm.go:310] 
	I1209 11:58:15.001685  662586 kubeadm.go:310] 	This error is likely caused by:
	I1209 11:58:15.001732  662586 kubeadm.go:310] 		- The kubelet is not running
	I1209 11:58:15.001883  662586 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 11:58:15.001897  662586 kubeadm.go:310] 
	I1209 11:58:15.002041  662586 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 11:58:15.002087  662586 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 11:58:15.002146  662586 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 11:58:15.002156  662586 kubeadm.go:310] 
	I1209 11:58:15.002294  662586 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 11:58:15.002373  662586 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 11:58:15.002380  662586 kubeadm.go:310] 
	I1209 11:58:15.002502  662586 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 11:58:15.002623  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 11:58:15.002725  662586 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 11:58:15.002799  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 11:58:15.002835  662586 kubeadm.go:310] 
	W1209 11:58:15.002956  662586 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1209 11:58:15.003022  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:58:15.469838  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:58:15.484503  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:58:15.493409  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:58:15.493430  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:58:15.493487  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:58:15.502508  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:58:15.502568  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:58:15.511743  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:58:15.519855  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:58:15.519913  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:58:15.528743  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:58:15.537000  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:58:15.537072  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:58:15.546520  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:58:15.555448  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:58:15.555526  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
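The config-check sequence above is minikube verifying whether each leftover kubeconfig under /etc/kubernetes still points at https://control-plane.minikube.internal:8443, and removing it when the grep fails. A minimal shell sketch of the same loop, assuming the same four file names shown in the log, is:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already targets the expected control-plane endpoint
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done

Here all four files are absent, so each grep exits with status 2 and the rm -f calls are no-ops before kubeadm init is retried.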
	I1209 11:58:15.565618  662586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:58:15.631763  662586 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:58:15.631832  662586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:58:15.798683  662586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:58:15.798822  662586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:58:15.798957  662586 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:58:15.974522  662586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:58:15.976286  662586 out.go:235]   - Generating certificates and keys ...
	I1209 11:58:15.976408  662586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:58:15.976492  662586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:58:15.976616  662586 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:58:15.976714  662586 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:58:15.976813  662586 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:58:15.976889  662586 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:58:15.976978  662586 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:58:15.977064  662586 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:58:15.977184  662586 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:58:15.977251  662586 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:58:15.977287  662586 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:58:15.977363  662586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:58:16.193383  662586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:58:16.324912  662586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:58:16.541372  662586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:58:16.786389  662586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:58:16.807241  662586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:58:16.808750  662586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:58:16.808823  662586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:58:16.951756  662586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:58:16.954338  662586 out.go:235]   - Booting up control plane ...
	I1209 11:58:16.954486  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:58:16.968892  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:58:16.970556  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:58:16.971301  662586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:58:16.974040  662586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:58:56.976537  662586 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:58:56.976966  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:56.977214  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:01.977861  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:01.978074  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:11.978821  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:11.979056  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:31.980118  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:31.980386  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 12:00:11.981507  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 12:00:11.981791  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 12:00:11.981804  662586 kubeadm.go:310] 
	I1209 12:00:11.981863  662586 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 12:00:11.981916  662586 kubeadm.go:310] 		timed out waiting for the condition
	I1209 12:00:11.981926  662586 kubeadm.go:310] 
	I1209 12:00:11.981977  662586 kubeadm.go:310] 	This error is likely caused by:
	I1209 12:00:11.982028  662586 kubeadm.go:310] 		- The kubelet is not running
	I1209 12:00:11.982232  662586 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 12:00:11.982262  662586 kubeadm.go:310] 
	I1209 12:00:11.982449  662586 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 12:00:11.982506  662586 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 12:00:11.982555  662586 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 12:00:11.982564  662586 kubeadm.go:310] 
	I1209 12:00:11.982709  662586 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 12:00:11.982824  662586 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 12:00:11.982837  662586 kubeadm.go:310] 
	I1209 12:00:11.982975  662586 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 12:00:11.983092  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 12:00:11.983186  662586 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 12:00:11.983259  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 12:00:11.983308  662586 kubeadm.go:310] 
	I1209 12:00:11.983442  662586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 12:00:11.983534  662586 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 12:00:11.983622  662586 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 12:00:11.983692  662586 kubeadm.go:394] duration metric: took 7m57.372617524s to StartCluster
	I1209 12:00:11.983778  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 12:00:11.983852  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 12:00:12.032068  662586 cri.go:89] found id: ""
	I1209 12:00:12.032110  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.032126  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 12:00:12.032139  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 12:00:12.032232  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 12:00:12.074929  662586 cri.go:89] found id: ""
	I1209 12:00:12.074977  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.074990  662586 logs.go:284] No container was found matching "etcd"
	I1209 12:00:12.075001  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 12:00:12.075074  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 12:00:12.113547  662586 cri.go:89] found id: ""
	I1209 12:00:12.113582  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.113592  662586 logs.go:284] No container was found matching "coredns"
	I1209 12:00:12.113598  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 12:00:12.113661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 12:00:12.147436  662586 cri.go:89] found id: ""
	I1209 12:00:12.147465  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.147475  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 12:00:12.147481  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 12:00:12.147535  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 12:00:12.184398  662586 cri.go:89] found id: ""
	I1209 12:00:12.184439  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.184453  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 12:00:12.184463  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 12:00:12.184541  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 12:00:12.230844  662586 cri.go:89] found id: ""
	I1209 12:00:12.230884  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.230896  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 12:00:12.230905  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 12:00:12.230981  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 12:00:12.264897  662586 cri.go:89] found id: ""
	I1209 12:00:12.264930  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.264939  662586 logs.go:284] No container was found matching "kindnet"
	I1209 12:00:12.264946  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 12:00:12.265001  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 12:00:12.303553  662586 cri.go:89] found id: ""
	I1209 12:00:12.303594  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.303607  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 12:00:12.303622  662586 logs.go:123] Gathering logs for container status ...
	I1209 12:00:12.303638  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 12:00:12.342799  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 12:00:12.342838  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 12:00:12.392992  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 12:00:12.393039  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 12:00:12.407065  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 12:00:12.407100  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 12:00:12.483599  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 12:00:12.483651  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 12:00:12.483675  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1209 12:00:12.591518  662586 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1209 12:00:12.591615  662586 out.go:270] * 
	W1209 12:00:12.591715  662586 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 12:00:12.591737  662586 out.go:270] * 
	W1209 12:00:12.592644  662586 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 12:00:12.596340  662586 out.go:201] 
	W1209 12:00:12.597706  662586 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 12:00:12.597757  662586 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 12:00:12.597798  662586 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 12:00:12.599219  662586 out.go:201] 
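Before the CRI-O log that follows, the diagnostics that the kubeadm output and the final K8S_KUBELET_NOT_RUNNING suggestion point at can be collected in one place. This is purely a convenience sketch using only the commands quoted above; CONTAINERID is the placeholder from the kubeadm hint, and the cgroup-driver flag is the override suggested in the error message:

	# kubelet health and recent journal entries
	systemctl status kubelet
	journalctl -xeu kubelet

	# list Kubernetes containers known to CRI-O, then inspect a failing one
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# retry with the cgroup driver override suggested by minikube
	minikube start --extra-config=kubelet.cgroup-driver=systemd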
	
	
	==> CRI-O <==
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.257588276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745938257553392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f30eca0-ba63-4dbe-ab86-83c15caa51e7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.258099524Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=262e3742-d36c-4886-bf86-61a97357cff3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.258250672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=262e3742-d36c-4886-bf86-61a97357cff3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.258444891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cd2924576549a280dcba998d853546db6a30837efcbf285175564babcbff919,PodSandboxId:b98ca63b5a1555dc050a61075fce6bc10f4f1a77958ce4d0b60df2933510611c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733745146632686621,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4e76af62-1ba8-410c-ace3-c92e48840825,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42,PodSandboxId:9cc643ec88d327b685ab6fa714ccf96a1c9b2cc90138ceaa78baa070fed18a92,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745142990608503,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z647g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e15e13e-efe6-4ae2-8bac-205aadf8f95a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb,PodSandboxId:fc6d68de344af38241a746310493add2f1c11df00bcbb98f413d294108bf2a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745142977116901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: aeba46d3-ecf1-4923-b89c-75b34e75a06d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f,PodSandboxId:fc6d68de344af38241a746310493add2f1c11df00bcbb98f413d294108bf2a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733745128033456849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
eba46d3-ecf1-4923-b89c-75b34e75a06d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2,PodSandboxId:caa8831be0bd8e39cb1d1990ba51ad6c70c99c9d531e9420f22596be2f01b978,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733745127414709401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hpvvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0945206c-8d1e-47e0-b35b-9011073423
b2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16,PodSandboxId:910053c757fafdf5b1c3ff2c244f3d09d3ff14ad898cdf63561e1845d9373e02,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733745123566013883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286a8335482d6443f935ef423fb83f8c,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413,PodSandboxId:5218b6309474d233bd08077d66abd5c967dd3f75b3b28ec1a3f9c5a30ea04ed1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733745123587161547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feed5b01992a8257b2679a0cdc55f40b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d,PodSandboxId:bc507605abc8700e8e949c93148b9faf0f46443616e103e6042634e7ad45bc52,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733745123542798054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5afc6fc69dc6125a529552eeff4d23ad,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb,PodSandboxId:119dbeb98f771e4092d9710b08a04c92705c549afb512e90f252736f96c6c997,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745123548954480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ebc694b948cf176fee9c9bd3684e24c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=262e3742-d36c-4886-bf86-61a97357cff3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.301749291Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=437ea313-6616-46ea-ac44-a9ab7bdbdf70 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.301906447Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=437ea313-6616-46ea-ac44-a9ab7bdbdf70 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.303385938Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f132a53a-16ca-47af-b7cb-df602786686e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.303884829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745938303790950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f132a53a-16ca-47af-b7cb-df602786686e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.304504264Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c698197-5c0c-413f-a5d9-cf18a3c8290d name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.304632781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c698197-5c0c-413f-a5d9-cf18a3c8290d name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.304987827Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cd2924576549a280dcba998d853546db6a30837efcbf285175564babcbff919,PodSandboxId:b98ca63b5a1555dc050a61075fce6bc10f4f1a77958ce4d0b60df2933510611c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733745146632686621,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4e76af62-1ba8-410c-ace3-c92e48840825,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42,PodSandboxId:9cc643ec88d327b685ab6fa714ccf96a1c9b2cc90138ceaa78baa070fed18a92,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745142990608503,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z647g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e15e13e-efe6-4ae2-8bac-205aadf8f95a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb,PodSandboxId:fc6d68de344af38241a746310493add2f1c11df00bcbb98f413d294108bf2a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745142977116901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: aeba46d3-ecf1-4923-b89c-75b34e75a06d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f,PodSandboxId:fc6d68de344af38241a746310493add2f1c11df00bcbb98f413d294108bf2a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733745128033456849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
eba46d3-ecf1-4923-b89c-75b34e75a06d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2,PodSandboxId:caa8831be0bd8e39cb1d1990ba51ad6c70c99c9d531e9420f22596be2f01b978,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733745127414709401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hpvvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0945206c-8d1e-47e0-b35b-9011073423
b2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16,PodSandboxId:910053c757fafdf5b1c3ff2c244f3d09d3ff14ad898cdf63561e1845d9373e02,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733745123566013883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286a8335482d6443f935ef423fb83f8c,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413,PodSandboxId:5218b6309474d233bd08077d66abd5c967dd3f75b3b28ec1a3f9c5a30ea04ed1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733745123587161547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feed5b01992a8257b2679a0cdc55f40b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d,PodSandboxId:bc507605abc8700e8e949c93148b9faf0f46443616e103e6042634e7ad45bc52,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733745123542798054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5afc6fc69dc6125a529552eeff4d23ad,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb,PodSandboxId:119dbeb98f771e4092d9710b08a04c92705c549afb512e90f252736f96c6c997,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745123548954480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ebc694b948cf176fee9c9bd3684e24c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c698197-5c0c-413f-a5d9-cf18a3c8290d name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.345050020Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aeda7ed0-9780-402c-8b58-349bc9db5a3a name=/runtime.v1.RuntimeService/Version
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.345123970Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aeda7ed0-9780-402c-8b58-349bc9db5a3a name=/runtime.v1.RuntimeService/Version
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.346786826Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba0bf408-011c-4159-aa08-ef404321f9c9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.347203915Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745938347178462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba0bf408-011c-4159-aa08-ef404321f9c9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.347800529Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa66765d-569a-4af6-bebe-dc59a95cdc5d name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.347930652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa66765d-569a-4af6-bebe-dc59a95cdc5d name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.348454000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cd2924576549a280dcba998d853546db6a30837efcbf285175564babcbff919,PodSandboxId:b98ca63b5a1555dc050a61075fce6bc10f4f1a77958ce4d0b60df2933510611c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733745146632686621,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4e76af62-1ba8-410c-ace3-c92e48840825,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42,PodSandboxId:9cc643ec88d327b685ab6fa714ccf96a1c9b2cc90138ceaa78baa070fed18a92,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745142990608503,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z647g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e15e13e-efe6-4ae2-8bac-205aadf8f95a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb,PodSandboxId:fc6d68de344af38241a746310493add2f1c11df00bcbb98f413d294108bf2a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745142977116901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: aeba46d3-ecf1-4923-b89c-75b34e75a06d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f,PodSandboxId:fc6d68de344af38241a746310493add2f1c11df00bcbb98f413d294108bf2a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733745128033456849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
eba46d3-ecf1-4923-b89c-75b34e75a06d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2,PodSandboxId:caa8831be0bd8e39cb1d1990ba51ad6c70c99c9d531e9420f22596be2f01b978,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733745127414709401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hpvvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0945206c-8d1e-47e0-b35b-9011073423
b2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16,PodSandboxId:910053c757fafdf5b1c3ff2c244f3d09d3ff14ad898cdf63561e1845d9373e02,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733745123566013883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286a8335482d6443f935ef423fb83f8c,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413,PodSandboxId:5218b6309474d233bd08077d66abd5c967dd3f75b3b28ec1a3f9c5a30ea04ed1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733745123587161547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feed5b01992a8257b2679a0cdc55f40b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d,PodSandboxId:bc507605abc8700e8e949c93148b9faf0f46443616e103e6042634e7ad45bc52,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733745123542798054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5afc6fc69dc6125a529552eeff4d23ad,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb,PodSandboxId:119dbeb98f771e4092d9710b08a04c92705c549afb512e90f252736f96c6c997,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745123548954480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ebc694b948cf176fee9c9bd3684e24c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa66765d-569a-4af6-bebe-dc59a95cdc5d name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.390092993Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e796188c-2d8e-422e-93d4-fc83ff0212d0 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.390188178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e796188c-2d8e-422e-93d4-fc83ff0212d0 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.391729142Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d3d595f-cf38-4717-8cdc-4d833512de3b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.392293605Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745938392269606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d3d595f-cf38-4717-8cdc-4d833512de3b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.393189632Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c4a0527-1104-49ad-a419-431a4fa0abd9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.393247487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c4a0527-1104-49ad-a419-431a4fa0abd9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:05:38 no-preload-820741 crio[714]: time="2024-12-09 12:05:38.393433847Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cd2924576549a280dcba998d853546db6a30837efcbf285175564babcbff919,PodSandboxId:b98ca63b5a1555dc050a61075fce6bc10f4f1a77958ce4d0b60df2933510611c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733745146632686621,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4e76af62-1ba8-410c-ace3-c92e48840825,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42,PodSandboxId:9cc643ec88d327b685ab6fa714ccf96a1c9b2cc90138ceaa78baa070fed18a92,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745142990608503,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z647g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e15e13e-efe6-4ae2-8bac-205aadf8f95a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb,PodSandboxId:fc6d68de344af38241a746310493add2f1c11df00bcbb98f413d294108bf2a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745142977116901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: aeba46d3-ecf1-4923-b89c-75b34e75a06d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f,PodSandboxId:fc6d68de344af38241a746310493add2f1c11df00bcbb98f413d294108bf2a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733745128033456849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
eba46d3-ecf1-4923-b89c-75b34e75a06d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2,PodSandboxId:caa8831be0bd8e39cb1d1990ba51ad6c70c99c9d531e9420f22596be2f01b978,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733745127414709401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hpvvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0945206c-8d1e-47e0-b35b-9011073423
b2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16,PodSandboxId:910053c757fafdf5b1c3ff2c244f3d09d3ff14ad898cdf63561e1845d9373e02,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733745123566013883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286a8335482d6443f935ef423fb83f8c,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413,PodSandboxId:5218b6309474d233bd08077d66abd5c967dd3f75b3b28ec1a3f9c5a30ea04ed1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733745123587161547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feed5b01992a8257b2679a0cdc55f40b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d,PodSandboxId:bc507605abc8700e8e949c93148b9faf0f46443616e103e6042634e7ad45bc52,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733745123542798054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5afc6fc69dc6125a529552eeff4d23ad,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb,PodSandboxId:119dbeb98f771e4092d9710b08a04c92705c549afb512e90f252736f96c6c997,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745123548954480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ebc694b948cf176fee9c9bd3684e24c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c4a0527-1104-49ad-a419-431a4fa0abd9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1cd2924576549       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   b98ca63b5a155       busybox
	909852cc820d2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   9cc643ec88d32       coredns-7c65d6cfc9-z647g
	d184b6139f52f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       3                   fc6d68de344af       storage-provisioner
	0ef403336ca71       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   fc6d68de344af       storage-provisioner
	de64a319ab30a       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   caa8831be0bd8       kube-proxy-hpvvp
	73b01a8a4080f       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   5218b6309474d       kube-scheduler-no-preload-820741
	13e00a6fef368       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   910053c757faf       etcd-no-preload-820741
	478ca5095dcdb       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   119dbeb98f771       kube-apiserver-no-preload-820741
	b6662f1bed199       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   bc507605abc87       kube-controller-manager-no-preload-820741
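
The container status table above is CRI-O's view of the node after the restart: every control-plane container is on attempt 1 and only storage-provisioner needed additional restarts. As a sketch for reproducing this snapshot by hand (assuming the no-preload-820741 profile is still running and crictl is present in the guest, which is the default for minikube's cri-o runtime):

	minikube ssh -p no-preload-820741 -- sudo crictl ps -a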
	
	
	==> coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:42173 - 37964 "HINFO IN 7368892457938397498.2172018361582216149. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011174169s
	
	
	==> describe nodes <==
	Name:               no-preload-820741
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-820741
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=no-preload-820741
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T11_44_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 11:44:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-820741
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 12:05:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 12:02:50 +0000   Mon, 09 Dec 2024 11:44:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 12:02:50 +0000   Mon, 09 Dec 2024 11:44:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 12:02:50 +0000   Mon, 09 Dec 2024 11:44:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 12:02:50 +0000   Mon, 09 Dec 2024 11:52:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    no-preload-820741
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f24c170235740c2b22e7e8cd666993b
	  System UUID:                7f24c170-2357-40c2-b22e-7e8cd666993b
	  Boot ID:                    aa8f51f5-2473-41a2-8839-2f66039495cc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7c65d6cfc9-z647g                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-820741                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-820741             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-820741    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-hpvvp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-820741             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-pwcsr              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-820741 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-820741 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-820741 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                21m                kubelet          Node no-preload-820741 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-820741 event: Registered Node no-preload-820741 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-820741 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-820741 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-820741 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-820741 event: Registered Node no-preload-820741 in Controller
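
Note that metrics-server-6867b74b74-pwcsr is listed among the node's non-terminated pods, while the kube-apiserver log further down shows its v1beta1.metrics.k8s.io APIService repeatedly returning 503. A hedged way to check the registration state directly against the same profile (context name assumed to match the profile name, as minikube normally sets it):

	kubectl --context no-preload-820741 get apiservice v1beta1.metrics.k8s.io -o wide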
	
	
	==> dmesg <==
	[Dec 9 11:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053494] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038497] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.816659] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.043961] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.600221] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.746987] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.056332] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059060] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.193561] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.130875] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.294390] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[Dec 9 11:52] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.059653] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.811855] systemd-fstab-generator[1428]: Ignoring "noauto" option for root device
	[  +3.321821] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.920131] systemd-fstab-generator[2123]: Ignoring "noauto" option for root device
	[  +5.064464] kauditd_printk_skb: 67 callbacks suppressed
	[  +7.796709] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] <==
	{"level":"warn","ts":"2024-12-09T11:52:14.492207Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"382.580926ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16466167026371683151 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-7c65d6cfc9-z647g.180f8001d24d58d7\" mod_revision:552 > success:<request_put:<key:\"/registry/events/kube-system/coredns-7c65d6cfc9-z647g.180f8001d24d58d7\" value_size:838 lease:7242794989516906910 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-7c65d6cfc9-z647g.180f8001d24d58d7\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-09T11:52:14.492356Z","caller":"traceutil/trace.go:171","msg":"trace[1110572743] linearizableReadLoop","detail":"{readStateIndex:596; appliedIndex:595; }","duration":"438.227883ms","start":"2024-12-09T11:52:14.054117Z","end":"2024-12-09T11:52:14.492345Z","steps":["trace[1110572743] 'read index received'  (duration: 55.388232ms)","trace[1110572743] 'applied index is now lower than readState.Index'  (duration: 382.838608ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T11:52:14.492458Z","caller":"traceutil/trace.go:171","msg":"trace[961554836] transaction","detail":"{read_only:false; response_revision:559; number_of_response:1; }","duration":"509.888202ms","start":"2024-12-09T11:52:13.982551Z","end":"2024-12-09T11:52:14.492439Z","steps":["trace[961554836] 'process raft request'  (duration: 127.023217ms)","trace[961554836] 'compare'  (duration: 382.508741ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T11:52:14.492537Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"438.394922ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-12-09T11:52:14.492558Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T11:52:13.982522Z","time spent":"509.991305ms","remote":"127.0.0.1:46046","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":926,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-7c65d6cfc9-z647g.180f8001d24d58d7\" mod_revision:552 > success:<request_put:<key:\"/registry/events/kube-system/coredns-7c65d6cfc9-z647g.180f8001d24d58d7\" value_size:838 lease:7242794989516906910 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-7c65d6cfc9-z647g.180f8001d24d58d7\" > >"}
	{"level":"info","ts":"2024-12-09T11:52:14.492574Z","caller":"traceutil/trace.go:171","msg":"trace[1446533464] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:559; }","duration":"438.474787ms","start":"2024-12-09T11:52:14.054093Z","end":"2024-12-09T11:52:14.492567Z","steps":["trace[1446533464] 'agreement among raft nodes before linearized reading'  (duration: 438.352785ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T11:52:14.492642Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T11:52:14.054059Z","time spent":"438.577758ms","remote":"127.0.0.1:45932","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-12-09T11:52:14.492884Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"326.224947ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-820741\" ","response":"range_response_count:1 size:4646"}
	{"level":"info","ts":"2024-12-09T11:52:14.492949Z","caller":"traceutil/trace.go:171","msg":"trace[1314823773] range","detail":"{range_begin:/registry/minions/no-preload-820741; range_end:; response_count:1; response_revision:559; }","duration":"326.290523ms","start":"2024-12-09T11:52:14.166652Z","end":"2024-12-09T11:52:14.492942Z","steps":["trace[1314823773] 'agreement among raft nodes before linearized reading'  (duration: 326.126909ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T11:52:14.493034Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T11:52:14.166536Z","time spent":"326.488113ms","remote":"127.0.0.1:46150","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4670,"request content":"key:\"/registry/minions/no-preload-820741\" "}
	{"level":"warn","ts":"2024-12-09T11:52:14.809687Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.062889ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16466167026371683156 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-no-preload-820741\" mod_revision:470 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-no-preload-820741\" value_size:6987 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-no-preload-820741\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-09T11:52:14.809876Z","caller":"traceutil/trace.go:171","msg":"trace[408837113] linearizableReadLoop","detail":"{readStateIndex:597; appliedIndex:596; }","duration":"230.703343ms","start":"2024-12-09T11:52:14.579157Z","end":"2024-12-09T11:52:14.809861Z","steps":["trace[408837113] 'read index received'  (duration: 122.327389ms)","trace[408837113] 'applied index is now lower than readState.Index'  (duration: 108.374233ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T11:52:14.809986Z","caller":"traceutil/trace.go:171","msg":"trace[1519864487] transaction","detail":"{read_only:false; response_revision:560; number_of_response:1; }","duration":"306.333232ms","start":"2024-12-09T11:52:14.503644Z","end":"2024-12-09T11:52:14.809978Z","steps":["trace[1519864487] 'process raft request'  (duration: 197.900023ms)","trace[1519864487] 'compare'  (duration: 107.921495ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T11:52:14.810083Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T11:52:14.503631Z","time spent":"306.400251ms","remote":"127.0.0.1:46164","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7054,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-no-preload-820741\" mod_revision:470 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-no-preload-820741\" value_size:6987 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-no-preload-820741\" > >"}
	{"level":"warn","ts":"2024-12-09T11:52:14.810243Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.146794ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-820741\" ","response":"range_response_count:1 size:4646"}
	{"level":"info","ts":"2024-12-09T11:52:14.810795Z","caller":"traceutil/trace.go:171","msg":"trace[1954236310] range","detail":"{range_begin:/registry/minions/no-preload-820741; range_end:; response_count:1; response_revision:560; }","duration":"143.697928ms","start":"2024-12-09T11:52:14.667081Z","end":"2024-12-09T11:52:14.810779Z","steps":["trace[1954236310] 'agreement among raft nodes before linearized reading'  (duration: 143.026382ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T11:52:14.810313Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.178944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-7c65d6cfc9-z647g.180f8001d928315c\" ","response":"range_response_count:1 size:810"}
	{"level":"info","ts":"2024-12-09T11:52:14.811109Z","caller":"traceutil/trace.go:171","msg":"trace[760022203] range","detail":"{range_begin:/registry/events/kube-system/coredns-7c65d6cfc9-z647g.180f8001d928315c; range_end:; response_count:1; response_revision:560; }","duration":"231.967268ms","start":"2024-12-09T11:52:14.579128Z","end":"2024-12-09T11:52:14.811095Z","steps":["trace[760022203] 'agreement among raft nodes before linearized reading'  (duration: 231.136879ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T11:52:54.938191Z","caller":"traceutil/trace.go:171","msg":"trace[157916628] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"334.346305ms","start":"2024-12-09T11:52:54.603779Z","end":"2024-12-09T11:52:54.938125Z","steps":["trace[157916628] 'process raft request'  (duration: 334.146809ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T11:52:54.938541Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T11:52:54.603765Z","time spent":"334.626726ms","remote":"127.0.0.1:46138","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:628 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-12-09T11:52:55.243229Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.08882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-pwcsr\" ","response":"range_response_count:1 size:4385"}
	{"level":"info","ts":"2024-12-09T11:52:55.243339Z","caller":"traceutil/trace.go:171","msg":"trace[1486569575] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-pwcsr; range_end:; response_count:1; response_revision:629; }","duration":"166.203658ms","start":"2024-12-09T11:52:55.077120Z","end":"2024-12-09T11:52:55.243324Z","steps":["trace[1486569575] 'range keys from in-memory index tree'  (duration: 165.980555ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T12:02:05.099120Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":848}
	{"level":"info","ts":"2024-12-09T12:02:05.110968Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":848,"took":"11.198155ms","hash":3965474849,"current-db-size-bytes":2596864,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2596864,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-12-09T12:02:05.111072Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3965474849,"revision":848,"compact-revision":-1}
	
	
	==> kernel <==
	 12:05:38 up 14 min,  0 users,  load average: 0.07, 0.12, 0.13
	Linux no-preload-820741 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] <==
	W1209 12:02:07.528516       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:02:07.528613       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1209 12:02:07.529658       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:02:07.529705       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1209 12:03:07.530546       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:03:07.530702       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1209 12:03:07.530545       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:03:07.530757       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1209 12:03:07.532045       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:03:07.532117       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1209 12:05:07.532726       1 handler_proxy.go:99] no RequestInfo found in the context
	W1209 12:05:07.532727       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:05:07.533110       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1209 12:05:07.533207       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1209 12:05:07.534322       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:05:07.534388       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] <==
	E1209 12:00:10.254736       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:00:10.707876       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:00:40.262225       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:00:40.716074       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:01:10.269685       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:01:10.724720       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:01:40.276642       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:01:40.733232       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:02:10.282749       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:02:10.740021       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:02:40.289476       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:02:40.748943       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1209 12:02:50.109231       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-820741"
	E1209 12:03:10.294992       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:03:10.756777       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1209 12:03:21.960300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="345.426µs"
	I1209 12:03:33.954289       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="55.472µs"
	E1209 12:03:40.301082       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:03:40.766454       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:04:10.307576       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:04:10.774086       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:04:40.314398       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:04:40.782767       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:05:10.320954       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:05:10.790734       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 11:52:07.936672       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 11:52:07.962984       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.169"]
	E1209 11:52:07.963187       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 11:52:08.120354       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 11:52:08.120432       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 11:52:08.120499       1 server_linux.go:169] "Using iptables Proxier"
	I1209 11:52:08.128711       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 11:52:08.131933       1 server.go:483] "Version info" version="v1.31.2"
	I1209 11:52:08.132013       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 11:52:08.139432       1 config.go:199] "Starting service config controller"
	I1209 11:52:08.139718       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 11:52:08.139989       1 config.go:328] "Starting node config controller"
	I1209 11:52:08.140009       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 11:52:08.140904       1 config.go:105] "Starting endpoint slice config controller"
	I1209 11:52:08.140918       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 11:52:08.240969       1 shared_informer.go:320] Caches are synced for node config
	I1209 11:52:08.240998       1 shared_informer.go:320] Caches are synced for service config
	I1209 11:52:08.241010       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] <==
	I1209 11:52:04.346699       1 serving.go:386] Generated self-signed cert in-memory
	W1209 11:52:06.471192       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 11:52:06.471232       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 11:52:06.471243       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 11:52:06.471294       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 11:52:06.528300       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1209 11:52:06.528361       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 11:52:06.537494       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 11:52:06.537538       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 11:52:06.538205       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1209 11:52:06.538275       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 11:52:06.638397       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 12:04:31 no-preload-820741 kubelet[1435]: E1209 12:04:31.940947    1435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pwcsr" podUID="40d4df7e-de82-478b-a77b-b27208d8262e"
	Dec 09 12:04:33 no-preload-820741 kubelet[1435]: E1209 12:04:33.143425    1435 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745873142576897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:04:33 no-preload-820741 kubelet[1435]: E1209 12:04:33.143454    1435 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745873142576897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:04:43 no-preload-820741 kubelet[1435]: E1209 12:04:43.144747    1435 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745883144522411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:04:43 no-preload-820741 kubelet[1435]: E1209 12:04:43.144870    1435 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745883144522411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:04:44 no-preload-820741 kubelet[1435]: E1209 12:04:44.940793    1435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pwcsr" podUID="40d4df7e-de82-478b-a77b-b27208d8262e"
	Dec 09 12:04:53 no-preload-820741 kubelet[1435]: E1209 12:04:53.146476    1435 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745893146034781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:04:53 no-preload-820741 kubelet[1435]: E1209 12:04:53.146785    1435 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745893146034781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:04:59 no-preload-820741 kubelet[1435]: E1209 12:04:59.941770    1435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pwcsr" podUID="40d4df7e-de82-478b-a77b-b27208d8262e"
	Dec 09 12:05:02 no-preload-820741 kubelet[1435]: E1209 12:05:02.976372    1435 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 12:05:02 no-preload-820741 kubelet[1435]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 12:05:02 no-preload-820741 kubelet[1435]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 12:05:02 no-preload-820741 kubelet[1435]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 12:05:02 no-preload-820741 kubelet[1435]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 12:05:03 no-preload-820741 kubelet[1435]: E1209 12:05:03.148341    1435 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745903147993203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:05:03 no-preload-820741 kubelet[1435]: E1209 12:05:03.148381    1435 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745903147993203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:05:13 no-preload-820741 kubelet[1435]: E1209 12:05:13.149525    1435 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745913149124698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:05:13 no-preload-820741 kubelet[1435]: E1209 12:05:13.149965    1435 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745913149124698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:05:14 no-preload-820741 kubelet[1435]: E1209 12:05:14.940642    1435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pwcsr" podUID="40d4df7e-de82-478b-a77b-b27208d8262e"
	Dec 09 12:05:23 no-preload-820741 kubelet[1435]: E1209 12:05:23.152496    1435 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745923152049759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:05:23 no-preload-820741 kubelet[1435]: E1209 12:05:23.152532    1435 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745923152049759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:05:25 no-preload-820741 kubelet[1435]: E1209 12:05:25.940914    1435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pwcsr" podUID="40d4df7e-de82-478b-a77b-b27208d8262e"
	Dec 09 12:05:33 no-preload-820741 kubelet[1435]: E1209 12:05:33.154977    1435 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745933153321740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:05:33 no-preload-820741 kubelet[1435]: E1209 12:05:33.155015    1435 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745933153321740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:05:37 no-preload-820741 kubelet[1435]: E1209 12:05:37.940023    1435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pwcsr" podUID="40d4df7e-de82-478b-a77b-b27208d8262e"
	
	
	==> storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] <==
	I1209 11:52:08.198656       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 11:52:08.203115       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] <==
	I1209 11:52:23.062937       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 11:52:23.097448       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 11:52:23.097523       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 11:52:40.518291       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 11:52:40.519561       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"445575d8-e094-46ac-b459-bc165449ec3d", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-820741_8697bd8a-a10e-4417-905d-a77078050fe9 became leader
	I1209 11:52:40.519883       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-820741_8697bd8a-a10e-4417-905d-a77078050fe9!
	I1209 11:52:40.621043       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-820741_8697bd8a-a10e-4417-905d-a77078050fe9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-820741 -n no-preload-820741
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-820741 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-pwcsr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-820741 describe pod metrics-server-6867b74b74-pwcsr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-820741 describe pod metrics-server-6867b74b74-pwcsr: exit status 1 (63.639779ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-pwcsr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-820741 describe pod metrics-server-6867b74b74-pwcsr: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.14s)
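A quick way to reproduce the wait that timed out above is to query the dashboard pod directly. This is only a sketch: it assumes the no-preload test waits on the same k8s-app=kubernetes-dashboard selector in the kubernetes-dashboard namespace that the default-k8s-diff-port test below waits on, and it reuses the no-preload-820741 kubeconfig context shown in the logs above.

	kubectl --context no-preload-820741 get pods --namespace=kubernetes-dashboard --selector=k8s-app=kubernetes-dashboard -o wide
	kubectl --context no-preload-820741 wait --namespace=kubernetes-dashboard --for=condition=ready pod --selector=k8s-app=kubernetes-dashboard --timeout=540s

If the first command returns no pods, the dashboard addon likely never came back up after the stop/start cycle; if it returns a pod that is not Ready, the second command mirrors the 9m0s wait the harness performs and should hit the same timeout.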

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-09 12:06:38.326055603 +0000 UTC m=+5586.786781109
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-482476 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-482476 logs -n 25: (2.06778253s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p running-upgrade-119214                              | running-upgrade-119214       | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-905993 | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:43 UTC |
	|         | disable-driver-mounts-905993                           |                              |         |         |                     |                     |
	| start   | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-005123            | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC | 09 Dec 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-005123                                  | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-820741             | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC | 09 Dec 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:46 UTC |
	| start   | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:47 UTC |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-005123                 | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-005123                                  | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-014592        | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-820741                  | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC | 09 Dec 24 11:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-482476  | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC | 09 Dec 24 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC |                     |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC | 09 Dec 24 11:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-014592             | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC | 09 Dec 24 11:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-482476       | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:49 UTC | 09 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 11:49:59
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 11:49:59.489110  663024 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:49:59.489218  663024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:49:59.489223  663024 out.go:358] Setting ErrFile to fd 2...
	I1209 11:49:59.489227  663024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:49:59.489393  663024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:49:59.489968  663024 out.go:352] Setting JSON to false
	I1209 11:49:59.491001  663024 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":16343,"bootTime":1733728656,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 11:49:59.491116  663024 start.go:139] virtualization: kvm guest
	I1209 11:49:59.493422  663024 out.go:177] * [default-k8s-diff-port-482476] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 11:49:59.495230  663024 notify.go:220] Checking for updates...
	I1209 11:49:59.495310  663024 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:49:59.496833  663024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:49:59.498350  663024 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:49:59.499799  663024 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:49:59.501159  663024 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 11:49:59.502351  663024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:49:59.503976  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:49:59.504355  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:49:59.504434  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:49:59.519867  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39863
	I1209 11:49:59.520292  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:49:59.520859  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:49:59.520886  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:49:59.521235  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:49:59.521438  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:49:59.521739  663024 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:49:59.522124  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:49:59.522225  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:49:59.537355  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39881
	I1209 11:49:59.537882  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:49:59.538473  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:49:59.538507  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:49:59.538862  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:49:59.539111  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:49:59.573642  663024 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 11:49:59.574808  663024 start.go:297] selected driver: kvm2
	I1209 11:49:59.574821  663024 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:49:59.574939  663024 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:49:59.575618  663024 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:49:59.575711  663024 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 11:49:59.591990  663024 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 11:49:59.592425  663024 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:49:59.592468  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:49:59.592500  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:49:59.592535  663024 start.go:340] cluster config:
	{Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:49:59.592645  663024 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:49:59.594451  663024 out.go:177] * Starting "default-k8s-diff-port-482476" primary control-plane node in "default-k8s-diff-port-482476" cluster
	I1209 11:49:56.270467  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:49:59.342522  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:49:59.595812  663024 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:49:59.595868  663024 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 11:49:59.595876  663024 cache.go:56] Caching tarball of preloaded images
	I1209 11:49:59.595966  663024 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 11:49:59.595978  663024 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 11:49:59.596080  663024 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/config.json ...
	I1209 11:49:59.596311  663024 start.go:360] acquireMachinesLock for default-k8s-diff-port-482476: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:50:05.422464  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:08.494459  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:14.574530  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:17.646514  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:23.726481  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:26.798485  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:32.878439  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:35.950501  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:42.030519  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:45.102528  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:51.182489  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:54.254539  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:00.334461  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:03.406475  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:09.486483  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:12.558522  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:18.638454  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:24.715494  662109 start.go:364] duration metric: took 4m3.035196519s to acquireMachinesLock for "no-preload-820741"
	I1209 11:51:24.715567  662109 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:51:24.715578  662109 fix.go:54] fixHost starting: 
	I1209 11:51:24.715984  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:51:24.716040  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:51:24.731722  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I1209 11:51:24.732247  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:51:24.732853  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:51:24.732876  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:51:24.733244  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:51:24.733437  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:24.733606  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:51:24.735295  662109 fix.go:112] recreateIfNeeded on no-preload-820741: state=Stopped err=<nil>
	I1209 11:51:24.735325  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	W1209 11:51:24.735521  662109 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:51:24.737237  662109 out.go:177] * Restarting existing kvm2 VM for "no-preload-820741" ...
	I1209 11:51:21.710446  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:24.712631  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:51:24.712695  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:51:24.713111  661546 buildroot.go:166] provisioning hostname "embed-certs-005123"
	I1209 11:51:24.713140  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:51:24.713398  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:51:24.715321  661546 machine.go:96] duration metric: took 4m34.547615205s to provisionDockerMachine
	I1209 11:51:24.715372  661546 fix.go:56] duration metric: took 4m34.572283015s for fixHost
	I1209 11:51:24.715381  661546 start.go:83] releasing machines lock for "embed-certs-005123", held for 4m34.572321017s
	W1209 11:51:24.715401  661546 start.go:714] error starting host: provision: host is not running
	W1209 11:51:24.715538  661546 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1209 11:51:24.715550  661546 start.go:729] Will try again in 5 seconds ...
	I1209 11:51:24.738507  662109 main.go:141] libmachine: (no-preload-820741) Calling .Start
	I1209 11:51:24.738692  662109 main.go:141] libmachine: (no-preload-820741) Ensuring networks are active...
	I1209 11:51:24.739450  662109 main.go:141] libmachine: (no-preload-820741) Ensuring network default is active
	I1209 11:51:24.739799  662109 main.go:141] libmachine: (no-preload-820741) Ensuring network mk-no-preload-820741 is active
	I1209 11:51:24.740206  662109 main.go:141] libmachine: (no-preload-820741) Getting domain xml...
	I1209 11:51:24.740963  662109 main.go:141] libmachine: (no-preload-820741) Creating domain...
	I1209 11:51:25.958244  662109 main.go:141] libmachine: (no-preload-820741) Waiting to get IP...
	I1209 11:51:25.959122  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:25.959507  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:25.959585  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:25.959486  663348 retry.go:31] will retry after 256.759149ms: waiting for machine to come up
	I1209 11:51:26.218626  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.219187  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.219222  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.219121  663348 retry.go:31] will retry after 259.957451ms: waiting for machine to come up
	I1209 11:51:26.480403  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.480800  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.480828  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.480753  663348 retry.go:31] will retry after 482.242492ms: waiting for machine to come up
	I1209 11:51:29.718422  661546 start.go:360] acquireMachinesLock for embed-certs-005123: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:51:26.964420  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.964870  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.964903  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.964821  663348 retry.go:31] will retry after 386.489156ms: waiting for machine to come up
	I1209 11:51:27.353471  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:27.353850  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:27.353875  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:27.353796  663348 retry.go:31] will retry after 602.322538ms: waiting for machine to come up
	I1209 11:51:27.957621  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:27.958020  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:27.958051  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:27.957967  663348 retry.go:31] will retry after 747.355263ms: waiting for machine to come up
	I1209 11:51:28.707049  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:28.707486  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:28.707515  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:28.707436  663348 retry.go:31] will retry after 1.034218647s: waiting for machine to come up
	I1209 11:51:29.743755  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:29.744171  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:29.744213  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:29.744119  663348 retry.go:31] will retry after 1.348194555s: waiting for machine to come up
	I1209 11:51:31.094696  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:31.095202  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:31.095234  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:31.095124  663348 retry.go:31] will retry after 1.226653754s: waiting for machine to come up
	I1209 11:51:32.323529  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:32.323935  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:32.323959  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:32.323884  663348 retry.go:31] will retry after 2.008914491s: waiting for machine to come up
	I1209 11:51:34.335246  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:34.335619  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:34.335658  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:34.335593  663348 retry.go:31] will retry after 1.835576732s: waiting for machine to come up
	I1209 11:51:36.173316  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:36.173752  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:36.173786  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:36.173711  663348 retry.go:31] will retry after 3.204076548s: waiting for machine to come up
	I1209 11:51:39.382184  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:39.382619  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:39.382656  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:39.382560  663348 retry.go:31] will retry after 3.298451611s: waiting for machine to come up
	I1209 11:51:44.103077  662586 start.go:364] duration metric: took 3m16.308265809s to acquireMachinesLock for "old-k8s-version-014592"
	I1209 11:51:44.103164  662586 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:51:44.103178  662586 fix.go:54] fixHost starting: 
	I1209 11:51:44.103657  662586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:51:44.103716  662586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:51:44.121162  662586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I1209 11:51:44.121672  662586 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:51:44.122203  662586 main.go:141] libmachine: Using API Version  1
	I1209 11:51:44.122232  662586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:51:44.122644  662586 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:51:44.122852  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:51:44.123023  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetState
	I1209 11:51:44.124544  662586 fix.go:112] recreateIfNeeded on old-k8s-version-014592: state=Stopped err=<nil>
	I1209 11:51:44.124567  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	W1209 11:51:44.124704  662586 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:51:44.126942  662586 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-014592" ...
	I1209 11:51:42.684438  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.684824  662109 main.go:141] libmachine: (no-preload-820741) Found IP for machine: 192.168.39.169
	I1209 11:51:42.684859  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has current primary IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.684867  662109 main.go:141] libmachine: (no-preload-820741) Reserving static IP address...
	I1209 11:51:42.685269  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "no-preload-820741", mac: "52:54:00:27:4c:0e", ip: "192.168.39.169"} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.685296  662109 main.go:141] libmachine: (no-preload-820741) DBG | skip adding static IP to network mk-no-preload-820741 - found existing host DHCP lease matching {name: "no-preload-820741", mac: "52:54:00:27:4c:0e", ip: "192.168.39.169"}
	I1209 11:51:42.685311  662109 main.go:141] libmachine: (no-preload-820741) Reserved static IP address: 192.168.39.169
	I1209 11:51:42.685334  662109 main.go:141] libmachine: (no-preload-820741) Waiting for SSH to be available...
	I1209 11:51:42.685348  662109 main.go:141] libmachine: (no-preload-820741) DBG | Getting to WaitForSSH function...
	I1209 11:51:42.687295  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.687588  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.687625  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.687702  662109 main.go:141] libmachine: (no-preload-820741) DBG | Using SSH client type: external
	I1209 11:51:42.687790  662109 main.go:141] libmachine: (no-preload-820741) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa (-rw-------)
	I1209 11:51:42.687824  662109 main.go:141] libmachine: (no-preload-820741) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:51:42.687844  662109 main.go:141] libmachine: (no-preload-820741) DBG | About to run SSH command:
	I1209 11:51:42.687857  662109 main.go:141] libmachine: (no-preload-820741) DBG | exit 0
	I1209 11:51:42.822609  662109 main.go:141] libmachine: (no-preload-820741) DBG | SSH cmd err, output: <nil>: 
	I1209 11:51:42.822996  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetConfigRaw
	I1209 11:51:42.823665  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:42.826484  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.826783  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.826808  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.827050  662109 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/config.json ...
	I1209 11:51:42.827323  662109 machine.go:93] provisionDockerMachine start ...
	I1209 11:51:42.827346  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:42.827620  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:42.830224  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.830569  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.830599  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.830717  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:42.830909  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.831107  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.831274  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:42.831454  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:42.831790  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:42.831807  662109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:51:42.938456  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:51:42.938500  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:42.938778  662109 buildroot.go:166] provisioning hostname "no-preload-820741"
	I1209 11:51:42.938813  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:42.939023  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:42.941706  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.942236  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.942267  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.942390  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:42.942606  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.942773  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.942922  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:42.943177  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:42.943382  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:42.943406  662109 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-820741 && echo "no-preload-820741" | sudo tee /etc/hostname
	I1209 11:51:43.065816  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-820741
	
	I1209 11:51:43.065849  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.068607  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.068916  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.068951  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.069127  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.069256  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.069351  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.069514  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.069637  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.069841  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.069861  662109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-820741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-820741/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-820741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:51:43.182210  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:51:43.182257  662109 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:51:43.182289  662109 buildroot.go:174] setting up certificates
	I1209 11:51:43.182305  662109 provision.go:84] configureAuth start
	I1209 11:51:43.182323  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:43.182674  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:43.185513  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.185872  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.185897  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.186018  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.188128  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.188482  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.188534  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.188668  662109 provision.go:143] copyHostCerts
	I1209 11:51:43.188752  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:51:43.188774  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:51:43.188840  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:51:43.188928  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:51:43.188936  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:51:43.188963  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:51:43.189019  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:51:43.189027  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:51:43.189049  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:51:43.189104  662109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.no-preload-820741 san=[127.0.0.1 192.168.39.169 localhost minikube no-preload-820741]
	I1209 11:51:43.488258  662109 provision.go:177] copyRemoteCerts
	I1209 11:51:43.488336  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:51:43.488367  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.491689  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.492025  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.492059  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.492267  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.492465  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.492635  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.492768  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:43.577708  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:51:43.602000  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 11:51:43.627251  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:51:43.651591  662109 provision.go:87] duration metric: took 469.266358ms to configureAuth
	I1209 11:51:43.651626  662109 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:51:43.651863  662109 config.go:182] Loaded profile config "no-preload-820741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:51:43.652059  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.655150  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.655489  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.655518  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.655738  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.655963  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.656146  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.656295  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.656483  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.656688  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.656710  662109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:51:43.870704  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:51:43.870738  662109 machine.go:96] duration metric: took 1.043398486s to provisionDockerMachine
	I1209 11:51:43.870756  662109 start.go:293] postStartSetup for "no-preload-820741" (driver="kvm2")
	I1209 11:51:43.870771  662109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:51:43.870796  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:43.871158  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:51:43.871186  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.873863  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.874207  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.874230  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.874408  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.874610  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.874800  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.874925  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:43.956874  662109 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:51:43.960825  662109 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:51:43.960853  662109 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:51:43.960919  662109 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:51:43.960993  662109 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:51:43.961095  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:51:43.970138  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:51:43.991975  662109 start.go:296] duration metric: took 121.20118ms for postStartSetup
	I1209 11:51:43.992020  662109 fix.go:56] duration metric: took 19.276442325s for fixHost
	I1209 11:51:43.992043  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.994707  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.995035  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.995069  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.995186  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.995403  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.995568  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.995716  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.995927  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.996107  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.996117  662109 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:51:44.102890  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745104.077047488
	
	I1209 11:51:44.102914  662109 fix.go:216] guest clock: 1733745104.077047488
	I1209 11:51:44.102922  662109 fix.go:229] Guest: 2024-12-09 11:51:44.077047488 +0000 UTC Remote: 2024-12-09 11:51:43.992024296 +0000 UTC m=+262.463051778 (delta=85.023192ms)
	I1209 11:51:44.102952  662109 fix.go:200] guest clock delta is within tolerance: 85.023192ms
	I1209 11:51:44.102957  662109 start.go:83] releasing machines lock for "no-preload-820741", held for 19.387413234s
	I1209 11:51:44.102980  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.103272  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:44.105929  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.106314  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.106341  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.106567  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107102  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107323  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107453  662109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:51:44.107507  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:44.107640  662109 ssh_runner.go:195] Run: cat /version.json
	I1209 11:51:44.107672  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:44.110422  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110792  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.110822  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110840  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110984  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:44.111194  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:44.111376  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.111395  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.111408  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:44.111569  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:44.111589  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:44.111722  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:44.111827  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:44.111986  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:44.228799  662109 ssh_runner.go:195] Run: systemctl --version
	I1209 11:51:44.234678  662109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:51:44.383290  662109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:51:44.388906  662109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:51:44.388981  662109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:51:44.405271  662109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:51:44.405308  662109 start.go:495] detecting cgroup driver to use...
	I1209 11:51:44.405389  662109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:51:44.425480  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:51:44.439827  662109 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:51:44.439928  662109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:51:44.454750  662109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:51:44.470828  662109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:51:44.595400  662109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:51:44.756743  662109 docker.go:233] disabling docker service ...
	I1209 11:51:44.756817  662109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:51:44.774069  662109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:51:44.788188  662109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:51:44.909156  662109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:51:45.036992  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:51:45.051284  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:51:45.071001  662109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:51:45.071074  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.081491  662109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:51:45.081549  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.091476  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.103237  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.114723  662109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:51:45.126330  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.136501  662109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.152804  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.163221  662109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:51:45.173297  662109 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:51:45.173379  662109 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:51:45.186209  662109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:51:45.195773  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:51:45.339593  662109 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:51:45.438766  662109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:51:45.438851  662109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:51:45.444775  662109 start.go:563] Will wait 60s for crictl version
	I1209 11:51:45.444847  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:45.449585  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:51:45.493796  662109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:51:45.493899  662109 ssh_runner.go:195] Run: crio --version
	I1209 11:51:45.521391  662109 ssh_runner.go:195] Run: crio --version
	I1209 11:51:45.551249  662109 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:51:45.552714  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:45.555910  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:45.556271  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:45.556298  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:45.556571  662109 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 11:51:45.560718  662109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:51:45.573027  662109 kubeadm.go:883] updating cluster {Name:no-preload-820741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:51:45.573171  662109 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:51:45.573226  662109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:51:45.613696  662109 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:51:45.613724  662109 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 11:51:45.613810  662109 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.613847  662109 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.613864  662109 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.613880  662109 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:45.613857  662109 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1209 11:51:45.613939  662109 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.613801  662109 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.613810  662109 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:45.615885  662109 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.615983  662109 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.615889  662109 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:45.615885  662109 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:45.615893  662109 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.615891  662109 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1209 11:51:45.615897  662109 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.615893  662109 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.819757  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.836546  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1209 11:51:45.851918  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.857461  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.857468  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.863981  662109 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1209 11:51:45.864038  662109 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.864122  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:45.865289  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.868361  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.030476  662109 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1209 11:51:46.030525  662109 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.030582  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030525  662109 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1209 11:51:46.030603  662109 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1209 11:51:46.030625  662109 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.030652  662109 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.030694  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030655  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030720  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.030760  662109 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1209 11:51:46.030794  662109 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.030823  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030823  662109 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1209 11:51:46.030845  662109 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.030868  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.041983  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.042072  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.042088  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.086909  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.086966  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.086997  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.141636  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.141723  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.141779  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.249908  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.249972  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.250024  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.250056  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.266345  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.266425  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.376691  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1209 11:51:46.376784  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1209 11:51:46.376904  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.376937  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.376911  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:46.376980  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.407997  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1209 11:51:46.408015  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1209 11:51:46.408120  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:46.408120  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:46.450341  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1209 11:51:46.450374  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.450445  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.450503  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1209 11:51:46.450537  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1209 11:51:46.450541  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1209 11:51:46.450570  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1209 11:51:46.450621  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:46.450621  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:46.450621  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1209 11:51:44.128421  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .Start
	I1209 11:51:44.128663  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring networks are active...
	I1209 11:51:44.129435  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring network default is active
	I1209 11:51:44.129805  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring network mk-old-k8s-version-014592 is active
	I1209 11:51:44.130314  662586 main.go:141] libmachine: (old-k8s-version-014592) Getting domain xml...
	I1209 11:51:44.131070  662586 main.go:141] libmachine: (old-k8s-version-014592) Creating domain...
	I1209 11:51:45.405214  662586 main.go:141] libmachine: (old-k8s-version-014592) Waiting to get IP...
	I1209 11:51:45.406116  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:45.406680  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:45.406716  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:45.406613  663492 retry.go:31] will retry after 249.130873ms: waiting for machine to come up
	I1209 11:51:45.657224  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:45.657727  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:45.657756  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:45.657687  663492 retry.go:31] will retry after 363.458278ms: waiting for machine to come up
	I1209 11:51:46.023431  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.023912  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.023945  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.023851  663492 retry.go:31] will retry after 313.220722ms: waiting for machine to come up
	I1209 11:51:46.339300  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.339850  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.339876  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.339791  663492 retry.go:31] will retry after 517.613322ms: waiting for machine to come up
	I1209 11:51:46.859825  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.860229  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.860260  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.860198  663492 retry.go:31] will retry after 710.195232ms: waiting for machine to come up
	I1209 11:51:47.572460  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:47.573030  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:47.573080  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:47.573008  663492 retry.go:31] will retry after 620.717522ms: waiting for machine to come up
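The interleaved 662586 lines above are libmachine waiting for the old-k8s-version-014592 VM to pick up a DHCP lease, retrying with a slowly growing delay. A minimal sketch of the same wait-with-backoff loop, assuming a hypothetical lookupIP helper in place of minikube's real libvirt lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the hypervisor's DHCP leases;
// it is hypothetical and only simulates "no lease yet" for a few rounds.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.61.132", nil
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		// Back off a little longer each round, with jitter, like the
		// "will retry after ..." messages in the log above.
		backoff := time.Duration(200+rand.Intn(300))*time.Millisecond +
			time.Duration(attempt)*250*time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
		time.Sleep(backoff)
	}
	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("Found IP for machine:", ip)
}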
	I1209 11:51:46.869631  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.822213  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.371704342s)
	I1209 11:51:48.822263  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1209 11:51:48.822262  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.371603127s)
	I1209 11:51:48.822296  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1209 11:51:48.822295  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.371584353s)
	I1209 11:51:48.822298  662109 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:48.822309  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1209 11:51:48.822324  662109 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.952666874s)
	I1209 11:51:48.822364  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:48.822367  662109 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1209 11:51:48.822416  662109 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.822460  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:50.794288  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.971891497s)
	I1209 11:51:50.794330  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1209 11:51:50.794357  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:50.794357  662109 ssh_runner.go:235] Completed: which crictl: (1.971876587s)
	I1209 11:51:50.794417  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:50.794437  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.195603  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:48.196140  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:48.196172  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:48.196083  663492 retry.go:31] will retry after 747.45082ms: waiting for machine to come up
	I1209 11:51:48.945230  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:48.945682  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:48.945737  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:48.945661  663492 retry.go:31] will retry after 1.307189412s: waiting for machine to come up
	I1209 11:51:50.254747  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:50.255335  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:50.255359  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:50.255276  663492 retry.go:31] will retry after 1.269881759s: waiting for machine to come up
	I1209 11:51:51.526966  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:51.527400  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:51.527431  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:51.527348  663492 retry.go:31] will retry after 1.424091669s: waiting for machine to come up
	I1209 11:51:52.958981  662109 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.164517823s)
	I1209 11:51:52.959044  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.164597978s)
	I1209 11:51:52.959089  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1209 11:51:52.959120  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:52.959057  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:52.959203  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:53.007629  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:54.832641  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.873398185s)
	I1209 11:51:54.832686  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1209 11:51:54.832694  662109 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.825022672s)
	I1209 11:51:54.832714  662109 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:54.832748  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1209 11:51:54.832769  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:54.832853  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:51:52.953290  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:52.953711  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:52.953743  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:52.953658  663492 retry.go:31] will retry after 2.009829783s: waiting for machine to come up
	I1209 11:51:54.965818  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:54.966337  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:54.966372  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:54.966285  663492 retry.go:31] will retry after 2.209879817s: waiting for machine to come up
	I1209 11:51:57.177397  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:57.177870  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:57.177901  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:57.177805  663492 retry.go:31] will retry after 2.999056002s: waiting for machine to come up
	I1209 11:51:58.433813  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.600992195s)
	I1209 11:51:58.433889  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1209 11:51:58.433913  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:58.433831  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.600948593s)
	I1209 11:51:58.433947  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1209 11:51:58.433961  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:59.792012  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.35801884s)
	I1209 11:51:59.792049  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1209 11:51:59.792078  662109 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:51:59.792127  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:52:00.635140  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1209 11:52:00.635193  662109 cache_images.go:123] Successfully loaded all cached images
	I1209 11:52:00.635212  662109 cache_images.go:92] duration metric: took 15.021464053s to LoadCachedImages
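Because no preload tarball exists for v1.31.2 on crio, the run above falls back to loading each cached image individually: inspect the runtime for the image, remove any stale tag, copy the cached tarball, then podman load it. A rough sketch of that check-then-load loop, assuming crictl and podman are on PATH and the tarballs already sit under /var/lib/minikube/images; the helper names and the two-image map are illustrative, not minikube's code:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// imageLoaded reports whether the container runtime already has the image,
// using `crictl inspecti`, which exits non-zero for unknown images.
func imageLoaded(image string) bool {
	return exec.Command("sudo", "crictl", "inspecti", image).Run() == nil
}

// loadFromCache feeds a cached tarball into the runtime's image store
// via `podman load`, mirroring the "Loading image: ..." lines above.
func loadFromCache(tarball string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	images := map[string]string{
		"registry.k8s.io/kube-apiserver:v1.31.2":  "kube-apiserver_v1.31.2",
		"registry.k8s.io/coredns/coredns:v1.11.3": "coredns_v1.11.3",
	}
	for ref, file := range images {
		if imageLoaded(ref) {
			fmt.Println("already present:", ref)
			continue
		}
		tarball := filepath.Join("/var/lib/minikube/images", file)
		if err := loadFromCache(tarball); err != nil {
			fmt.Println("load failed:", err)
			continue
		}
		fmt.Println("Transferred and loaded", file, "from cache")
	}
}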
	I1209 11:52:00.635232  662109 kubeadm.go:934] updating node { 192.168.39.169 8443 v1.31.2 crio true true} ...
	I1209 11:52:00.635395  662109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-820741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:00.635481  662109 ssh_runner.go:195] Run: crio config
	I1209 11:52:00.680321  662109 cni.go:84] Creating CNI manager for ""
	I1209 11:52:00.680345  662109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:00.680370  662109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:00.680394  662109 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.169 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-820741 NodeName:no-preload-820741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:00.680545  662109 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-820741"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.169"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.169"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:00.680614  662109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:00.690391  662109 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:00.690484  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:00.699034  662109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1209 11:52:00.714710  662109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:00.730375  662109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
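The kubeadm config dumped above is rendered from the cluster parameters (node name, advertise address, pod subnet, CRI socket, Kubernetes version) and staged as kubeadm.yaml.new before the init phases run. A minimal text/template sketch of that rendering step, covering only a small slice of the real config and writing to /tmp instead of /var/tmp/minikube; the struct and field names are made up for illustration:

package main

import (
	"os"
	"text/template"
)

// params holds the handful of values substituted into the config;
// the real generator carries many more options.
type params struct {
	NodeName  string
	NodeIP    string
	PodSubnet string
	CRISocket string
	K8sVer    string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVer}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: 10.96.0.0/12
`

func main() {
	p := params{
		NodeName:  "no-preload-820741",
		NodeIP:    "192.168.39.169",
		PodSubnet: "10.244.0.0/16",
		CRISocket: "unix:///var/run/crio/crio.sock",
		K8sVer:    "v1.31.2",
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Stage the rendered config the way the log stages kubeadm.yaml.new.
	f, err := os.Create("/tmp/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := t.Execute(f, p); err != nil {
		panic(err)
	}
}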
	I1209 11:52:00.747519  662109 ssh_runner.go:195] Run: grep 192.168.39.169	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:00.751163  662109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
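The grep and bash one-liner above make sure /etc/hosts carries exactly one control-plane.minikube.internal entry pointing at the current node IP. The same idempotent update expressed in Go, pointed at a local copy of the hosts file so the sketch stays safe to run; the filename is illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry keeps exactly one line mapping host -> ip in the file,
// dropping any previous line for that host, like the bash one-liner above.
// Blank lines are also dropped, which the real hosts file tolerates.
func ensureHostEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostEntry("hosts.copy", "192.168.39.169", "control-plane.minikube.internal"); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("host entry ensured")
}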
	I1209 11:52:00.762405  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:00.881308  662109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:00.898028  662109 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741 for IP: 192.168.39.169
	I1209 11:52:00.898060  662109 certs.go:194] generating shared ca certs ...
	I1209 11:52:00.898085  662109 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:00.898349  662109 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:00.898415  662109 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:00.898429  662109 certs.go:256] generating profile certs ...
	I1209 11:52:00.898565  662109 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.key
	I1209 11:52:00.898646  662109 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.key.814e22a1
	I1209 11:52:00.898701  662109 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.key
	I1209 11:52:00.898859  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:00.898904  662109 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:00.898918  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:00.898949  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:00.898982  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:00.899007  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:00.899045  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:00.899994  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:00.943848  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:00.970587  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:01.025164  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:01.055766  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 11:52:01.089756  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:52:01.112171  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:01.135928  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 11:52:01.157703  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:01.179806  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:01.201663  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:01.223314  662109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:01.239214  662109 ssh_runner.go:195] Run: openssl version
	I1209 11:52:01.244687  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:01.254630  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.258801  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.258849  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.264219  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:01.274077  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:01.284511  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.289141  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.289216  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.295079  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:01.305606  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:01.315795  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.320085  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.320147  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.325590  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:01.335747  662109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:01.340113  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:01.346217  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:01.351799  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:01.357441  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:01.362784  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:01.368210  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
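The openssl x509 ... -checkend 86400 runs above confirm that each control-plane certificate is still valid for at least another day before the cluster restart proceeds. An equivalent check with Go's crypto/x509, assuming a PEM-encoded certificate at one of the paths used in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h, regeneration needed")
	} else {
		fmt.Println("certificate is valid for at least another day")
	}
}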
	I1209 11:52:01.373975  662109 kubeadm.go:392] StartCluster: {Name:no-preload-820741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:01.374101  662109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:01.374160  662109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:01.409780  662109 cri.go:89] found id: ""
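Before deciding between restart and re-init, the run lists CRI containers labeled with the kube-system namespace and here finds none (found id: ""). A small wrapper around the same crictl invocation shown in the log, assuming crictl is installed and its runtime endpoint is configured:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the IDs of all containers (any state)
// whose pod namespace label is kube-system, like the crictl run above.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if len(ids) == 0 {
		fmt.Println(`found id: ""`) // nothing running yet, as in the log
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}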
	I1209 11:52:01.409852  662109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:01.419505  662109 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:01.419550  662109 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:01.419603  662109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:01.429000  662109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:01.429999  662109 kubeconfig.go:125] found "no-preload-820741" server: "https://192.168.39.169:8443"
	I1209 11:52:01.432151  662109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:01.440964  662109 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.169
	I1209 11:52:01.441003  662109 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:01.441021  662109 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:01.441084  662109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:01.474788  662109 cri.go:89] found id: ""
	I1209 11:52:01.474865  662109 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:01.491360  662109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:01.500483  662109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:01.500505  662109 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:01.500558  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:01.509190  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:01.509251  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:01.518248  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:01.526845  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:01.526909  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:01.535849  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:01.544609  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:01.544672  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:01.553527  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:01.561876  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:01.561928  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
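The grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443 and is otherwise deleted so the subsequent kubeadm init phases recreate it. A compact sketch of that loop with the same paths and simplified error handling:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const want = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), want) {
			// Missing or pointing somewhere else: remove so that
			// `kubeadm init phase kubeconfig all` regenerates it.
			fmt.Printf("%q may not be in %s - will remove\n", want, f)
			os.Remove(f)
			continue
		}
		fmt.Println("keeping", f)
	}
}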
	I1209 11:52:00.178781  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:00.179225  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:52:00.179273  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:52:00.179165  663492 retry.go:31] will retry after 4.532370187s: waiting for machine to come up
	I1209 11:52:05.915073  663024 start.go:364] duration metric: took 2m6.318720193s to acquireMachinesLock for "default-k8s-diff-port-482476"
	I1209 11:52:05.915166  663024 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:52:05.915179  663024 fix.go:54] fixHost starting: 
	I1209 11:52:05.915652  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:05.915716  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:05.933810  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
	I1209 11:52:05.934363  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:05.935019  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:52:05.935071  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:05.935489  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:05.935682  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:05.935879  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:52:05.937627  663024 fix.go:112] recreateIfNeeded on default-k8s-diff-port-482476: state=Stopped err=<nil>
	I1209 11:52:05.937660  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	W1209 11:52:05.937842  663024 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:52:05.939893  663024 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-482476" ...
	I1209 11:52:01.570657  662109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:01.579782  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:01.680268  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.573653  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.762024  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.826444  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.932170  662109 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:02.932291  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.432933  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.933186  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.948529  662109 api_server.go:72] duration metric: took 1.016357501s to wait for apiserver process to appear ...
	I1209 11:52:03.948565  662109 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:03.948595  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.443635  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.443675  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:06.443692  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.490801  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.490839  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:06.490860  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.502460  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.502497  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
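The healthz probes above initially return 403 because the check is unauthenticated (system:anonymous may not read /healthz), so the wait loop keeps polling the endpoint until it answers 200. A small poller of the same shape, skipping TLS verification the way a bootstrap probe against minikube's self-signed CA has to; the endpoint is taken from the log, the timings are illustrative:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.39.169:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's serving cert is signed by minikube's own CA,
		// so this bootstrap probe skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok:", string(body))
				return
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		} else {
			fmt.Println("healthz check failed:", err)
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver never became healthy")
}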
	I1209 11:52:04.713201  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.713711  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has current primary IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.713817  662586 main.go:141] libmachine: (old-k8s-version-014592) Found IP for machine: 192.168.61.132
	I1209 11:52:04.713853  662586 main.go:141] libmachine: (old-k8s-version-014592) Reserving static IP address...
	I1209 11:52:04.714267  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "old-k8s-version-014592", mac: "52:54:00:54:72:3e", ip: "192.168.61.132"} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.714298  662586 main.go:141] libmachine: (old-k8s-version-014592) Reserved static IP address: 192.168.61.132
	I1209 11:52:04.714318  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | skip adding static IP to network mk-old-k8s-version-014592 - found existing host DHCP lease matching {name: "old-k8s-version-014592", mac: "52:54:00:54:72:3e", ip: "192.168.61.132"}
	I1209 11:52:04.714332  662586 main.go:141] libmachine: (old-k8s-version-014592) Waiting for SSH to be available...
	I1209 11:52:04.714347  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Getting to WaitForSSH function...
	I1209 11:52:04.716632  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.716972  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.717005  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.717129  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Using SSH client type: external
	I1209 11:52:04.717157  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa (-rw-------)
	I1209 11:52:04.717192  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:04.717206  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | About to run SSH command:
	I1209 11:52:04.717223  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | exit 0
	I1209 11:52:04.846290  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:04.846675  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetConfigRaw
	I1209 11:52:04.847483  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:04.850430  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.850859  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.850888  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.851113  662586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/config.json ...
	I1209 11:52:04.851328  662586 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:04.851348  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:04.851547  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:04.854318  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.854622  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.854654  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.854782  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:04.854959  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.855134  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.855276  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:04.855438  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:04.855696  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:04.855709  662586 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:04.963021  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:04.963059  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:04.963344  662586 buildroot.go:166] provisioning hostname "old-k8s-version-014592"
	I1209 11:52:04.963368  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:04.963545  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:04.966102  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.966461  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.966496  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.966607  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:04.966780  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.966919  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.967056  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:04.967221  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:04.967407  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:04.967419  662586 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-014592 && echo "old-k8s-version-014592" | sudo tee /etc/hostname
	I1209 11:52:05.094147  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-014592
	
	I1209 11:52:05.094210  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.097298  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.097729  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.097765  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.097949  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.098197  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.098460  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.098632  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.098829  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.099046  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.099082  662586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-014592' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-014592/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-014592' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:05.210739  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:05.210785  662586 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:05.210846  662586 buildroot.go:174] setting up certificates
	I1209 11:52:05.210859  662586 provision.go:84] configureAuth start
	I1209 11:52:05.210881  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:05.211210  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:05.214546  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.214937  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.214967  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.215167  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.217866  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.218269  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.218300  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.218452  662586 provision.go:143] copyHostCerts
	I1209 11:52:05.218530  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:05.218558  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:05.218630  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:05.218807  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:05.218820  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:05.218863  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:05.218943  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:05.218953  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:05.218983  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:05.219060  662586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-014592 san=[127.0.0.1 192.168.61.132 localhost minikube old-k8s-version-014592]
	I1209 11:52:05.292744  662586 provision.go:177] copyRemoteCerts
	I1209 11:52:05.292830  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:05.292867  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.296244  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.296670  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.296712  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.296896  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.297111  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.297330  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.297514  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.381148  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:05.404883  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 11:52:05.433421  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:05.456775  662586 provision.go:87] duration metric: took 245.894878ms to configureAuth
	I1209 11:52:05.456811  662586 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:05.457003  662586 config.go:182] Loaded profile config "old-k8s-version-014592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 11:52:05.457082  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.459984  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.460372  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.460415  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.460631  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.460851  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.461021  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.461217  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.461481  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.461702  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.461722  662586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:05.683276  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:05.683311  662586 machine.go:96] duration metric: took 831.968459ms to provisionDockerMachine
	I1209 11:52:05.683335  662586 start.go:293] postStartSetup for "old-k8s-version-014592" (driver="kvm2")
	I1209 11:52:05.683349  662586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:05.683391  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.683809  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:05.683850  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.687116  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.687540  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.687579  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.687787  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.688013  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.688204  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.688439  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.768777  662586 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:05.772572  662586 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:05.772603  662586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:05.772690  662586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:05.772813  662586 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:05.772942  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:05.784153  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:05.808677  662586 start.go:296] duration metric: took 125.320445ms for postStartSetup
	I1209 11:52:05.808736  662586 fix.go:56] duration metric: took 21.705557963s for fixHost
	I1209 11:52:05.808766  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.811685  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.812053  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.812090  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.812426  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.812639  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.812853  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.812996  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.813345  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.813562  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.813572  662586 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:05.914863  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745125.875320243
	
	I1209 11:52:05.914892  662586 fix.go:216] guest clock: 1733745125.875320243
	I1209 11:52:05.914906  662586 fix.go:229] Guest: 2024-12-09 11:52:05.875320243 +0000 UTC Remote: 2024-12-09 11:52:05.808742373 +0000 UTC m=+218.159686894 (delta=66.57787ms)
	I1209 11:52:05.914941  662586 fix.go:200] guest clock delta is within tolerance: 66.57787ms
	I1209 11:52:05.914952  662586 start.go:83] releasing machines lock for "old-k8s-version-014592", held for 21.811813657s
	I1209 11:52:05.914983  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.915289  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:05.918015  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.918513  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.918546  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.918662  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919315  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919508  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919628  662586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:05.919684  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.919739  662586 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:05.919767  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.922529  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.922816  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923096  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.923121  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923223  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.923258  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923291  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.923459  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.923602  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.923616  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.923848  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.923900  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.924030  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.924104  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:06.037215  662586 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:06.043193  662586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:06.193717  662586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:06.199693  662586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:06.199786  662586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:06.216007  662586 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:06.216040  662586 start.go:495] detecting cgroup driver to use...
	I1209 11:52:06.216131  662586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:06.233631  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:06.249730  662586 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:06.249817  662586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:06.265290  662586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:06.281676  662586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:06.432116  662586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:06.605899  662586 docker.go:233] disabling docker service ...
	I1209 11:52:06.606004  662586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:06.622861  662586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:06.637605  662586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:06.772842  662586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:06.905950  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:06.923048  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:06.943483  662586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 11:52:06.943542  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.957647  662586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:06.957725  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.970221  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.981243  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.992084  662586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:07.004284  662586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:07.014329  662586 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:07.014411  662586 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:07.028104  662586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:52:07.038782  662586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:07.155779  662586 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:07.271726  662586 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:07.271815  662586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:07.276994  662586 start.go:563] Will wait 60s for crictl version
	I1209 11:52:07.277061  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:07.281212  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:07.328839  662586 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:07.328959  662586 ssh_runner.go:195] Run: crio --version
	I1209 11:52:07.360632  662586 ssh_runner.go:195] Run: crio --version
	I1209 11:52:07.393046  662586 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1209 11:52:07.394357  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:07.398002  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:07.398539  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:07.398564  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:07.398893  662586 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:07.404512  662586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:07.417822  662586 kubeadm.go:883] updating cluster {Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:07.418006  662586 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 11:52:07.418108  662586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:07.473163  662586 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:52:07.473249  662586 ssh_runner.go:195] Run: which lz4
	I1209 11:52:07.478501  662586 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:07.483744  662586 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:07.483786  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1209 11:52:06.949438  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.959097  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:06.959150  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:07.449249  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:07.466817  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:07.466860  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:07.948998  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:07.958340  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I1209 11:52:07.966049  662109 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:07.966095  662109 api_server.go:131] duration metric: took 4.017521352s to wait for apiserver health ...
	I1209 11:52:07.966111  662109 cni.go:84] Creating CNI manager for ""
	I1209 11:52:07.966121  662109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:07.967962  662109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:52:05.941206  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Start
	I1209 11:52:05.941411  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring networks are active...
	I1209 11:52:05.942245  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring network default is active
	I1209 11:52:05.942724  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring network mk-default-k8s-diff-port-482476 is active
	I1209 11:52:05.943274  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Getting domain xml...
	I1209 11:52:05.944080  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Creating domain...
	I1209 11:52:07.394633  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting to get IP...
	I1209 11:52:07.396032  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.397444  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.397560  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.397434  663663 retry.go:31] will retry after 205.256699ms: waiting for machine to come up
	I1209 11:52:07.604209  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.604884  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.604920  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.604828  663663 retry.go:31] will retry after 291.255961ms: waiting for machine to come up
	I1209 11:52:07.897467  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.898992  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.899020  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.898866  663663 retry.go:31] will retry after 437.180412ms: waiting for machine to come up
	I1209 11:52:08.337664  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.338195  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.338235  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:08.338151  663663 retry.go:31] will retry after 603.826089ms: waiting for machine to come up
	I1209 11:52:08.944048  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.944672  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.944702  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:08.944612  663663 retry.go:31] will retry after 557.882868ms: waiting for machine to come up
	I1209 11:52:07.969367  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:07.986045  662109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:52:08.075377  662109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:08.091609  662109 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:08.091648  662109 system_pods.go:61] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:08.091656  662109 system_pods.go:61] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:08.091664  662109 system_pods.go:61] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:08.091670  662109 system_pods.go:61] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:08.091675  662109 system_pods.go:61] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:52:08.091681  662109 system_pods.go:61] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:08.091686  662109 system_pods.go:61] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:08.091691  662109 system_pods.go:61] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:08.091699  662109 system_pods.go:74] duration metric: took 16.289433ms to wait for pod list to return data ...
	I1209 11:52:08.091707  662109 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:08.096961  662109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:08.097010  662109 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:08.097047  662109 node_conditions.go:105] duration metric: took 5.334194ms to run NodePressure ...
	I1209 11:52:08.097073  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:08.573868  662109 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:08.583670  662109 kubeadm.go:739] kubelet initialised
	I1209 11:52:08.583700  662109 kubeadm.go:740] duration metric: took 9.800796ms waiting for restarted kubelet to initialise ...
	I1209 11:52:08.583713  662109 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:08.592490  662109 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.600581  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.600611  662109 pod_ready.go:82] duration metric: took 8.087599ms for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.600623  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.600633  662109 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.609663  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "etcd-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.609698  662109 pod_ready.go:82] duration metric: took 9.054194ms for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.609712  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "etcd-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.609722  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.615482  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-apiserver-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.615514  662109 pod_ready.go:82] duration metric: took 5.78152ms for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.615526  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-apiserver-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.615536  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.623662  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.623698  662109 pod_ready.go:82] duration metric: took 8.151877ms for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.623713  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.623722  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.978286  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-proxy-hpvvp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.978323  662109 pod_ready.go:82] duration metric: took 354.589596ms for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.978344  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-proxy-hpvvp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.978356  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:09.378434  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-scheduler-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.378471  662109 pod_ready.go:82] duration metric: took 400.107028ms for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:09.378484  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-scheduler-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.378494  662109 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:09.778087  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.778117  662109 pod_ready.go:82] duration metric: took 399.613592ms for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:09.778129  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.778138  662109 pod_ready.go:39] duration metric: took 1.194413796s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:09.778162  662109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:52:09.793629  662109 ops.go:34] apiserver oom_adj: -16
	I1209 11:52:09.793663  662109 kubeadm.go:597] duration metric: took 8.374104555s to restartPrimaryControlPlane
	I1209 11:52:09.793681  662109 kubeadm.go:394] duration metric: took 8.419719684s to StartCluster
	I1209 11:52:09.793708  662109 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:09.793848  662109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:52:09.796407  662109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:09.796774  662109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:52:09.796837  662109 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:52:09.796954  662109 addons.go:69] Setting storage-provisioner=true in profile "no-preload-820741"
	I1209 11:52:09.796975  662109 addons.go:234] Setting addon storage-provisioner=true in "no-preload-820741"
	W1209 11:52:09.796984  662109 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:52:09.797023  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.797048  662109 config.go:182] Loaded profile config "no-preload-820741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:09.797086  662109 addons.go:69] Setting default-storageclass=true in profile "no-preload-820741"
	I1209 11:52:09.797110  662109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-820741"
	I1209 11:52:09.797119  662109 addons.go:69] Setting metrics-server=true in profile "no-preload-820741"
	I1209 11:52:09.797150  662109 addons.go:234] Setting addon metrics-server=true in "no-preload-820741"
	W1209 11:52:09.797160  662109 addons.go:243] addon metrics-server should already be in state true
	I1209 11:52:09.797204  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.797545  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797571  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797579  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797596  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.797611  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.797620  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.799690  662109 out.go:177] * Verifying Kubernetes components...
	I1209 11:52:09.801035  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:09.814968  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33991
	I1209 11:52:09.815010  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34319
	I1209 11:52:09.815576  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.815715  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.816340  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.816361  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.816666  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.816683  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.816745  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.817402  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.817449  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.818118  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.818680  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.818718  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.842345  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37501
	I1209 11:52:09.842582  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44183
	I1209 11:52:09.842703  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38793
	I1209 11:52:09.843479  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843608  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843667  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843973  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.843999  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.844168  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.844180  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.844575  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.844773  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.845107  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.845122  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.845633  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.845887  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.847386  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.848553  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.849410  662109 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:52:09.849690  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.850230  662109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:09.850303  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:52:09.850323  662109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:52:09.850346  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.851051  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.851404  662109 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:52:09.851426  662109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:52:09.851447  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.855303  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.855935  662109 addons.go:234] Setting addon default-storageclass=true in "no-preload-820741"
	W1209 11:52:09.855958  662109 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:52:09.855991  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.856373  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.856429  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.857583  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.857614  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.857874  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.858206  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.858588  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.858766  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:09.859464  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.859875  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.859897  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.860238  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.860449  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.860597  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.860736  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:09.880235  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I1209 11:52:09.880846  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.881409  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.881429  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.881855  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.882651  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.882711  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.904576  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I1209 11:52:09.905132  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.905765  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.905788  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.906224  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.906469  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.908475  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.908715  662109 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:52:09.908735  662109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:52:09.908756  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.912294  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.912928  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.912963  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.913128  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.913383  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.913563  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.913711  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:10.141200  662109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:10.172182  662109 node_ready.go:35] waiting up to 6m0s for node "no-preload-820741" to be "Ready" ...
	I1209 11:52:10.306617  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:52:10.306646  662109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:52:10.321962  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:52:10.326125  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:52:10.360534  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:52:10.360568  662109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:52:10.470875  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:52:10.470917  662109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:52:10.555610  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:52:11.721480  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.395310752s)
	I1209 11:52:11.721571  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721638  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.721581  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.165925756s)
	I1209 11:52:11.721735  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.399738143s)
	I1209 11:52:11.721753  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721766  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.721765  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721779  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722002  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722014  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722021  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722028  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722201  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722213  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722221  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722226  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722320  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.722329  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722349  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.722360  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722384  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722395  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722424  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722438  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722465  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722475  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722490  662109 addons.go:475] Verifying addon metrics-server=true in "no-preload-820741"
	I1209 11:52:11.722560  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722579  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722564  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.729638  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.729660  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.729934  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.729950  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.731642  662109 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
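The addon flow recorded above is: scp each manifest onto the node under /etc/kubernetes/addons/, then apply it with the kubectl binary minikube ships on the node, against the node-local kubeconfig. A minimal sketch of the equivalent manual invocation, assuming the same paths and Kubernetes version as this run (the exact manifest set varies per addon):

	# Apply the storage and metrics-server addon manifests the same way the log does,
	# via the bundled kubectl and the node-local kubeconfig.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.31.2/kubectl apply \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml \
	  -f /etc/kubernetes/addons/storageclass.yaml \
	  -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	  -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	  -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	  -f /etc/kubernetes/addons/metrics-server-service.yaml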
	I1209 11:52:09.097654  662586 crio.go:462] duration metric: took 1.619191765s to copy over tarball
	I1209 11:52:09.097748  662586 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:12.304496  662586 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.20670295s)
	I1209 11:52:12.304543  662586 crio.go:469] duration metric: took 3.206852542s to extract the tarball
	I1209 11:52:12.304553  662586 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:12.347991  662586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:12.385411  662586 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:52:12.385438  662586 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 11:52:12.385533  662586 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:12.385557  662586 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.385570  662586 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.385609  662586 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.385641  662586 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 11:52:12.385650  662586 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.385645  662586 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.385620  662586 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.387326  662586 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.387335  662586 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:12.387371  662586 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 11:52:12.387372  662586 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.387328  662586 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.387338  662586 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.387328  662586 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.387383  662586 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.621631  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.623694  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.632536  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 11:52:12.634550  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.638401  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.641071  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.645344  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:09.504566  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:09.505124  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:09.505155  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:09.505076  663663 retry.go:31] will retry after 636.87343ms: waiting for machine to come up
	I1209 11:52:10.144387  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.145090  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.145119  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:10.145037  663663 retry.go:31] will retry after 716.448577ms: waiting for machine to come up
	I1209 11:52:10.863113  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.863816  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.863848  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:10.863762  663663 retry.go:31] will retry after 901.007245ms: waiting for machine to come up
	I1209 11:52:11.766356  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:11.766745  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:11.766773  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:11.766688  663663 retry.go:31] will retry after 1.570604193s: waiting for machine to come up
	I1209 11:52:13.339318  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:13.339796  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:13.339828  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:13.339744  663663 retry.go:31] will retry after 1.928200683s: waiting for machine to come up
	I1209 11:52:11.732956  662109 addons.go:510] duration metric: took 1.936137102s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1209 11:52:12.175844  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:14.504491  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:12.756066  662586 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 11:52:12.756121  662586 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.756134  662586 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 11:52:12.756175  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.756179  662586 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.756230  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.808091  662586 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 11:52:12.808139  662586 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 11:52:12.808186  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809593  662586 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 11:52:12.809622  662586 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 11:52:12.809637  662586 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.809659  662586 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.809682  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809712  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809775  662586 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 11:52:12.809803  662586 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.809829  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.809841  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809724  662586 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 11:52:12.809873  662586 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.809898  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809933  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.812256  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:12.819121  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.825106  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.910431  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.910501  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.910560  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.910503  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.910638  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:12.910713  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.930461  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:13.079147  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:13.079189  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:13.079233  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:13.079276  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:13.079418  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:13.079447  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:13.079517  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:13.224753  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 11:52:13.227126  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 11:52:13.227190  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:13.227253  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 11:52:13.227291  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:13.227332  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 11:52:13.227393  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 11:52:13.277747  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 11:52:13.285286  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 11:52:13.663858  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:13.805603  662586 cache_images.go:92] duration metric: took 1.420145666s to LoadCachedImages
	W1209 11:52:13.805814  662586 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1209 11:52:13.805848  662586 kubeadm.go:934] updating node { 192.168.61.132 8443 v1.20.0 crio true true} ...
	I1209 11:52:13.805980  662586 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-014592 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:13.806079  662586 ssh_runner.go:195] Run: crio config
	I1209 11:52:13.870766  662586 cni.go:84] Creating CNI manager for ""
	I1209 11:52:13.870797  662586 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:13.870813  662586 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:13.870841  662586 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.132 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-014592 NodeName:old-k8s-version-014592 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 11:52:13.871050  662586 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-014592"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
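	The YAML above is the kubeadm/kubelet/kube-proxy configuration minikube renders for this v1.20.0 profile. A short sketch of how such a rendered config is staged and consumed during a restart, assuming the default minikube paths that appear later in this log (the file is written as kubeadm.yaml.new, diffed against the previous copy, promoted into place, and then fed to the individual kubeadm init phases):

	# Stage the regenerated config and rerun a control-plane init phase against it.
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new || true   # reconfigure only if it changed
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml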
	
	I1209 11:52:13.871136  662586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 11:52:13.881556  662586 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:13.881628  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:13.891122  662586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1209 11:52:13.908181  662586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:13.925041  662586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1209 11:52:13.941567  662586 ssh_runner.go:195] Run: grep 192.168.61.132	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:13.945502  662586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:13.957476  662586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:14.091699  662586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:14.108772  662586 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592 for IP: 192.168.61.132
	I1209 11:52:14.108810  662586 certs.go:194] generating shared ca certs ...
	I1209 11:52:14.108838  662586 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:14.109024  662586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:14.109087  662586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:14.109105  662586 certs.go:256] generating profile certs ...
	I1209 11:52:14.109248  662586 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.key
	I1209 11:52:14.109323  662586 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key.28078577
	I1209 11:52:14.109383  662586 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key
	I1209 11:52:14.109572  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:14.109609  662586 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:14.109619  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:14.109659  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:14.109697  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:14.109737  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:14.109802  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:14.110497  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:14.145815  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:14.179452  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:14.217469  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:14.250288  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 11:52:14.287110  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:52:14.317190  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:14.356825  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:14.379756  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:14.402045  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:14.425287  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:14.448025  662586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:14.464144  662586 ssh_runner.go:195] Run: openssl version
	I1209 11:52:14.470256  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:14.481298  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.485849  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.485904  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.492321  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:14.504155  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:14.515819  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.520876  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.520955  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.527295  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:14.538319  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:14.549753  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.554273  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.554341  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.559893  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
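	The /etc/ssl/certs/<hash>.0 links created here follow the OpenSSL subject-hash naming convention, which is how the system trust store locates a CA by subject. A minimal sketch deriving the link name for the minikube CA, using the same commands the log runs:

	# The 8-hex-digit link name is the certificate's subject hash.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # yields b5213941.0 in this run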
	I1209 11:52:14.570744  662586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:14.575763  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:14.582279  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:14.588549  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:14.594376  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:14.599758  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:14.605497  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
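	The -checkend 86400 calls above are certificate-expiry probes: openssl x509 -checkend exits non-zero if the certificate expires within the given number of seconds (here, 24 hours), which is what tells the restart path whether control-plane certs need regeneration. A small sketch of acting on that exit code, assuming the same cert path:

	if ! sudo openssl x509 -noout -checkend 86400 \
	     -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "apiserver-kubelet-client.crt expires within 24h; certs need regeneration"
	fi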
	I1209 11:52:14.611083  662586 kubeadm.go:392] StartCluster: {Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:14.611213  662586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:14.611288  662586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:14.649447  662586 cri.go:89] found id: ""
	I1209 11:52:14.649538  662586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:14.660070  662586 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:14.660094  662586 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:14.660145  662586 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:14.670412  662586 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:14.671387  662586 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-014592" does not appear in /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:52:14.672043  662586 kubeconfig.go:62] /home/jenkins/minikube-integration/20068-609844/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-014592" cluster setting kubeconfig missing "old-k8s-version-014592" context setting]
	I1209 11:52:14.673337  662586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:14.708285  662586 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:14.719486  662586 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.132
	I1209 11:52:14.719535  662586 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:14.719563  662586 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:14.719635  662586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:14.755280  662586 cri.go:89] found id: ""
	I1209 11:52:14.755369  662586 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:14.771385  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:14.781364  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:14.781387  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:14.781455  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:14.790942  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:14.791016  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:14.800481  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:14.809875  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:14.809948  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:14.819619  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:14.831670  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:14.831750  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:14.844244  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:14.853328  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:14.853403  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:14.862428  662586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:14.871346  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.007799  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.697594  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.921787  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:16.031826  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:16.132199  662586 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:16.132310  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:16.633329  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:17.133389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:17.632581  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
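	The repeated pgrep runs above are the wait loop for the restarted kube-apiserver process, polled roughly every 500ms. A hedged sketch of an equivalent standalone loop (the 60-second cap is illustrative, not taken from the log):

	# Wait for a minikube-started kube-apiserver process to appear, up to ~60s.
	for _ in $(seq 1 120); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	  sleep 0.5
	done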
	I1209 11:52:15.270255  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:15.270804  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:15.270836  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:15.270741  663663 retry.go:31] will retry after 2.90998032s: waiting for machine to come up
	I1209 11:52:18.182069  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:18.182741  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:18.182774  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:18.182689  663663 retry.go:31] will retry after 3.196470388s: waiting for machine to come up
	I1209 11:52:16.676188  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:17.175894  662109 node_ready.go:49] node "no-preload-820741" has status "Ready":"True"
	I1209 11:52:17.175928  662109 node_ready.go:38] duration metric: took 7.003696159s for node "no-preload-820741" to be "Ready" ...
	I1209 11:52:17.175945  662109 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:17.180647  662109 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:19.188583  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:18.133165  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:18.632403  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:19.132416  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:19.633332  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:20.133343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:20.632968  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:21.133411  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:21.632656  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:22.132876  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:22.632816  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
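The repeated pgrep runs interleaved through this part of the log are the apiserver wait: once the static-pod manifests are in place, minikube polls about every 500ms for a kube-apiserver process whose command line mentions minikube. Condensed into a single loop, the check is simply:

    # Wait for the kube-apiserver static pod's process to appear (same command the log runs every ~0.5s).
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 0.5
    done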
	I1209 11:52:21.381260  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:21.381912  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:21.381943  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:21.381834  663663 retry.go:31] will retry after 3.621023528s: waiting for machine to come up
	I1209 11:52:26.142813  661546 start.go:364] duration metric: took 56.424295065s to acquireMachinesLock for "embed-certs-005123"
	I1209 11:52:26.142877  661546 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:52:26.142886  661546 fix.go:54] fixHost starting: 
	I1209 11:52:26.143376  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:26.143416  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:26.164438  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45591
	I1209 11:52:26.165041  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:26.165779  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:52:26.165828  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:26.166318  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:26.166544  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:26.166745  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:52:26.168534  661546 fix.go:112] recreateIfNeeded on embed-certs-005123: state=Stopped err=<nil>
	I1209 11:52:26.168564  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	W1209 11:52:26.168753  661546 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:52:26.170973  661546 out.go:177] * Restarting existing kvm2 VM for "embed-certs-005123" ...
	I1209 11:52:26.172269  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Start
	I1209 11:52:26.172500  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring networks are active...
	I1209 11:52:26.173391  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring network default is active
	I1209 11:52:26.173747  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring network mk-embed-certs-005123 is active
	I1209 11:52:26.174208  661546 main.go:141] libmachine: (embed-certs-005123) Getting domain xml...
	I1209 11:52:26.174990  661546 main.go:141] libmachine: (embed-certs-005123) Creating domain...
	I1209 11:52:21.687274  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:23.688011  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:24.187886  662109 pod_ready.go:93] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.187917  662109 pod_ready.go:82] duration metric: took 7.007243363s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.187928  662109 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.193936  662109 pod_ready.go:93] pod "etcd-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.193958  662109 pod_ready.go:82] duration metric: took 6.02353ms for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.193966  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.203685  662109 pod_ready.go:93] pod "kube-apiserver-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.203712  662109 pod_ready.go:82] duration metric: took 9.739287ms for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.203722  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.210004  662109 pod_ready.go:93] pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.210034  662109 pod_ready.go:82] duration metric: took 6.304008ms for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.210048  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.216225  662109 pod_ready.go:93] pod "kube-proxy-hpvvp" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.216249  662109 pod_ready.go:82] duration metric: took 6.193945ms for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.216258  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.584682  662109 pod_ready.go:93] pod "kube-scheduler-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.584711  662109 pod_ready.go:82] duration metric: took 368.445803ms for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.584724  662109 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
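The pod_ready checks above walk the system-critical pods one by one (CoreDNS, etcd, apiserver, controller-manager, kube-proxy, scheduler) and then start waiting on metrics-server, which is still reporting Ready=False at this point in the log. The same readiness checks can be approximated from outside with kubectl; a sketch, assuming kubectl carries the no-preload-820741 context from this run:

    # Wait for CoreDNS the same way pod_ready does, then inspect the rest of kube-system.
    kubectl --context no-preload-820741 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
    kubectl --context no-preload-820741 -n kube-system get pods -o wide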
	I1209 11:52:25.004323  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.004761  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Found IP for machine: 192.168.50.25
	I1209 11:52:25.004791  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has current primary IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.004798  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Reserving static IP address...
	I1209 11:52:25.005275  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-482476", mac: "52:54:00:f0:c9:8a", ip: "192.168.50.25"} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.005301  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | skip adding static IP to network mk-default-k8s-diff-port-482476 - found existing host DHCP lease matching {name: "default-k8s-diff-port-482476", mac: "52:54:00:f0:c9:8a", ip: "192.168.50.25"}
	I1209 11:52:25.005314  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Reserved static IP address: 192.168.50.25
	I1209 11:52:25.005328  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for SSH to be available...
	I1209 11:52:25.005342  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Getting to WaitForSSH function...
	I1209 11:52:25.007758  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.008146  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.008189  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.008291  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Using SSH client type: external
	I1209 11:52:25.008318  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa (-rw-------)
	I1209 11:52:25.008348  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:25.008361  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | About to run SSH command:
	I1209 11:52:25.008369  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | exit 0
	I1209 11:52:25.130532  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:25.130901  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetConfigRaw
	I1209 11:52:25.131568  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:25.134487  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.134816  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.134854  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.135163  663024 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/config.json ...
	I1209 11:52:25.135451  663024 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:25.135480  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:25.135736  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.138444  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.138853  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.138894  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.138981  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.139188  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.139327  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.139491  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.139655  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.139895  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.139906  663024 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:25.242441  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:25.242472  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.242837  663024 buildroot.go:166] provisioning hostname "default-k8s-diff-port-482476"
	I1209 11:52:25.242878  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.243093  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.245995  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.246447  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.246478  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.246685  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.246900  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.247052  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.247175  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.247330  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.247518  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.247531  663024 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-482476 && echo "default-k8s-diff-port-482476" | sudo tee /etc/hostname
	I1209 11:52:25.361366  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-482476
	
	I1209 11:52:25.361397  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.364194  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.364608  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.364639  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.364813  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.365064  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.365267  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.365369  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.365613  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.365790  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.365808  663024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-482476' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-482476/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-482476' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:25.475311  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:25.475346  663024 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:25.475386  663024 buildroot.go:174] setting up certificates
	I1209 11:52:25.475403  663024 provision.go:84] configureAuth start
	I1209 11:52:25.475412  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.475711  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:25.478574  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.478903  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.478935  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.479055  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.481280  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.481655  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.481688  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.481788  663024 provision.go:143] copyHostCerts
	I1209 11:52:25.481845  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:25.481876  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:25.481957  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:25.482056  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:25.482065  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:25.482090  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:25.482243  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:25.482254  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:25.482279  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:25.482336  663024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-482476 san=[127.0.0.1 192.168.50.25 default-k8s-diff-port-482476 localhost minikube]
	I1209 11:52:25.534856  663024 provision.go:177] copyRemoteCerts
	I1209 11:52:25.534921  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:25.534951  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.537732  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.538138  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.538190  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.538390  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.538611  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.538783  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.538943  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:25.619772  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:25.643527  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1209 11:52:25.668517  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:25.693573  663024 provision.go:87] duration metric: took 218.153182ms to configureAuth
	I1209 11:52:25.693615  663024 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:25.693807  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:25.693906  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.696683  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.697058  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.697092  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.697344  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.697548  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.697741  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.697868  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.698033  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.698229  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.698254  663024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:25.915568  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:25.915595  663024 machine.go:96] duration metric: took 780.126343ms to provisionDockerMachine
	I1209 11:52:25.915610  663024 start.go:293] postStartSetup for "default-k8s-diff-port-482476" (driver="kvm2")
	I1209 11:52:25.915620  663024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:25.915644  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:25.916005  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:25.916047  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.919268  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.919602  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.919628  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.919775  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.919967  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.920133  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.920285  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.000530  663024 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:26.004544  663024 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:26.004574  663024 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:26.004651  663024 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:26.004759  663024 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:26.004885  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:26.013444  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:26.036052  663024 start.go:296] duration metric: took 120.422739ms for postStartSetup
	I1209 11:52:26.036110  663024 fix.go:56] duration metric: took 20.120932786s for fixHost
	I1209 11:52:26.036135  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.039079  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.039445  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.039478  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.039797  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.040065  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.040228  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.040427  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.040620  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:26.040906  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:26.040924  663024 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:26.142590  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745146.090497627
	
	I1209 11:52:26.142623  663024 fix.go:216] guest clock: 1733745146.090497627
	I1209 11:52:26.142634  663024 fix.go:229] Guest: 2024-12-09 11:52:26.090497627 +0000 UTC Remote: 2024-12-09 11:52:26.036115182 +0000 UTC m=+146.587055001 (delta=54.382445ms)
	I1209 11:52:26.142669  663024 fix.go:200] guest clock delta is within tolerance: 54.382445ms
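The date +%s.%N exchange above is a clock-skew check: the guest's timestamp is compared with the host-side timestamp for the same moment, and the host is accepted because the delta (~54ms) is within tolerance. Stripped down, the comparison is two readings and a subtraction; the SSH key path below is shortened from the one in the log and is illustrative only:

    # Read the guest clock over SSH and diff it against the local clock; a large delta means the VM clock drifted.
    guest=$(ssh -i ~/.minikube/machines/default-k8s-diff-port-482476/id_rsa docker@192.168.50.25 'date +%s.%N')
    host=$(date +%s.%N)
    echo "clock delta: $(echo "$host - $guest" | bc) s"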
	I1209 11:52:26.142681  663024 start.go:83] releasing machines lock for "default-k8s-diff-port-482476", held for 20.227543026s
	I1209 11:52:26.142723  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.143032  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:26.146118  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.146602  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.146634  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.146841  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147440  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147709  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147833  663024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:26.147872  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.147980  663024 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:26.148009  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.151002  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151346  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151379  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.151410  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151534  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.151729  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.151848  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.151876  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151904  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.152003  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.152082  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.152159  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.152322  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.152565  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.231575  663024 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:26.267939  663024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:26.418953  663024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:26.426243  663024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:26.426337  663024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:26.448407  663024 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:26.448442  663024 start.go:495] detecting cgroup driver to use...
	I1209 11:52:26.448540  663024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:26.469675  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:26.488825  663024 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:26.488902  663024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:26.507716  663024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:26.525232  663024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:26.664062  663024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:26.854813  663024 docker.go:233] disabling docker service ...
	I1209 11:52:26.854883  663024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:26.870021  663024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:26.883610  663024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:27.001237  663024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:27.126865  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:27.144121  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:27.168073  663024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:52:27.168242  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.180516  663024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:27.180587  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.191681  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.204047  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.214157  663024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:27.225934  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.236691  663024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.258774  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
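Taken together, the sed edits above tune CRI-O rather than rewrite it: they pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged low ports. Reconstructed from those commands (not a dump of the actual file), the relevant part of /etc/crio/crio.conf.d/02-crio.conf should end up roughly as:

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]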
	I1209 11:52:27.271986  663024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:27.283488  663024 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:27.283539  663024 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:27.299065  663024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:52:27.309203  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:27.431740  663024 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:27.529577  663024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:27.529668  663024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:27.534733  663024 start.go:563] Will wait 60s for crictl version
	I1209 11:52:27.534800  663024 ssh_runner.go:195] Run: which crictl
	I1209 11:52:27.538544  663024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:27.577577  663024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:27.577684  663024 ssh_runner.go:195] Run: crio --version
	I1209 11:52:27.607938  663024 ssh_runner.go:195] Run: crio --version
	I1209 11:52:27.645210  663024 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:52:23.133393  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:23.632776  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:24.133286  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:24.632415  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:25.132348  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:25.632478  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:26.132982  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:26.632517  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.132692  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.633291  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.646510  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:27.650014  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:27.650439  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:27.650469  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:27.650705  663024 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:27.654738  663024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:27.668671  663024 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:27.668808  663024 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:52:27.668873  663024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:27.709582  663024 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:52:27.709679  663024 ssh_runner.go:195] Run: which lz4
	I1209 11:52:27.713702  663024 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:27.717851  663024 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:27.717887  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 11:52:29.037160  663024 crio.go:462] duration metric: took 1.32348676s to copy over tarball
	I1209 11:52:29.037262  663024 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:27.500098  661546 main.go:141] libmachine: (embed-certs-005123) Waiting to get IP...
	I1209 11:52:27.501088  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.501538  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.501605  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.501510  663907 retry.go:31] will retry after 191.187925ms: waiting for machine to come up
	I1209 11:52:27.694017  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.694574  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.694605  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.694512  663907 retry.go:31] will retry after 256.268ms: waiting for machine to come up
	I1209 11:52:27.952185  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.952863  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.952908  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.952759  663907 retry.go:31] will retry after 460.272204ms: waiting for machine to come up
	I1209 11:52:28.414403  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:28.414925  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:28.414967  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:28.414873  663907 retry.go:31] will retry after 450.761189ms: waiting for machine to come up
	I1209 11:52:28.867687  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:28.868350  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:28.868389  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:28.868313  663907 retry.go:31] will retry after 615.800863ms: waiting for machine to come up
	I1209 11:52:29.486566  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:29.487179  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:29.487218  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:29.487108  663907 retry.go:31] will retry after 628.641045ms: waiting for machine to come up
	I1209 11:52:30.117051  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:30.117424  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:30.117459  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:30.117356  663907 retry.go:31] will retry after 902.465226ms: waiting for machine to come up
	I1209 11:52:31.021756  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:31.022268  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:31.022298  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:31.022229  663907 retry.go:31] will retry after 918.939368ms: waiting for machine to come up
	I1209 11:52:26.594953  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:29.093499  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:28.132379  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:28.633377  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:29.132983  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:29.633370  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:30.132748  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:30.633383  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.133450  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.633210  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:32.132406  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:32.632598  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.234956  663024 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.197609203s)
	I1209 11:52:31.235007  663024 crio.go:469] duration metric: took 2.197798334s to extract the tarball
	I1209 11:52:31.235018  663024 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:31.275616  663024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:31.320918  663024 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:52:31.320945  663024 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:52:31.320961  663024 kubeadm.go:934] updating node { 192.168.50.25 8444 v1.31.2 crio true true} ...
	I1209 11:52:31.321122  663024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-482476 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:31.321246  663024 ssh_runner.go:195] Run: crio config
	I1209 11:52:31.367805  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:52:31.367827  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:31.367839  663024 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:31.367863  663024 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.25 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-482476 NodeName:default-k8s-diff-port-482476 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:31.368005  663024 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.25
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-482476"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.25"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.25"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:31.368074  663024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:31.377831  663024 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:31.377902  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:31.386872  663024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1209 11:52:31.403764  663024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:31.419295  663024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1209 11:52:31.435856  663024 ssh_runner.go:195] Run: grep 192.168.50.25	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:31.439480  663024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:31.455136  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:31.573295  663024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:31.589679  663024 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476 for IP: 192.168.50.25
	I1209 11:52:31.589703  663024 certs.go:194] generating shared ca certs ...
	I1209 11:52:31.589741  663024 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:31.589930  663024 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:31.589982  663024 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:31.589995  663024 certs.go:256] generating profile certs ...
	I1209 11:52:31.590137  663024 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.key
	I1209 11:52:31.590256  663024 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.key.e2346b12
	I1209 11:52:31.590322  663024 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.key
	I1209 11:52:31.590479  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:31.590522  663024 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:31.590535  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:31.590571  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:31.590612  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:31.590649  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:31.590710  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:31.591643  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:31.634363  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:31.660090  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:31.692933  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:31.726010  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 11:52:31.757565  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 11:52:31.781368  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:31.805233  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:31.828391  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:31.850407  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:31.873159  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:31.895503  663024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:31.911754  663024 ssh_runner.go:195] Run: openssl version
	I1209 11:52:31.917771  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:31.929857  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.934518  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.934596  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.940382  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:31.951417  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:31.961966  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.966234  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.966286  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.972070  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:31.982547  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:31.993215  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:31.997579  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:31.997641  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:32.003050  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:32.013463  663024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:32.017936  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:32.024029  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:32.029686  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:32.035260  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:32.040696  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:32.046116  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 11:52:32.051521  663024 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:32.051605  663024 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:32.051676  663024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:32.092529  663024 cri.go:89] found id: ""
	I1209 11:52:32.092623  663024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:32.103153  663024 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:32.103183  663024 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:32.103247  663024 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:32.113029  663024 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:32.114506  663024 kubeconfig.go:125] found "default-k8s-diff-port-482476" server: "https://192.168.50.25:8444"
	I1209 11:52:32.116929  663024 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:32.127055  663024 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.25
	I1209 11:52:32.127108  663024 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:32.127124  663024 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:32.127189  663024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:32.169401  663024 cri.go:89] found id: ""
	I1209 11:52:32.169507  663024 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:32.187274  663024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:32.196843  663024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:32.196867  663024 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:32.196925  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 11:52:32.205670  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:32.205754  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:32.214977  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 11:52:32.223707  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:32.223782  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:32.232514  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 11:52:32.240999  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:32.241076  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:32.250049  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 11:52:32.258782  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:32.258846  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:32.268447  663024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:32.277875  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:32.394016  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.494978  663024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.100920844s)
	I1209 11:52:33.495030  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.719319  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.787272  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.882783  663024 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:33.882876  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.383090  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.942735  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:31.943207  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:31.943244  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:31.943141  663907 retry.go:31] will retry after 1.153139191s: waiting for machine to come up
	I1209 11:52:33.097672  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:33.098233  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:33.098299  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:33.098199  663907 retry.go:31] will retry after 2.002880852s: waiting for machine to come up
	I1209 11:52:35.103239  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:35.103693  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:35.103724  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:35.103639  663907 retry.go:31] will retry after 2.219510124s: waiting for machine to come up
	I1209 11:52:31.593184  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:34.090877  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:36.094569  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:33.132924  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:33.632884  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.132528  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.632989  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.133398  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.632376  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:36.132936  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:36.633152  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:37.133343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:37.633367  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.883172  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.384008  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.883940  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.901453  663024 api_server.go:72] duration metric: took 2.018670363s to wait for apiserver process to appear ...
	I1209 11:52:35.901489  663024 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:35.901524  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.225976  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:38.226017  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:38.226037  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.269459  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:38.269549  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:38.401652  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.407995  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:38.408028  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:38.902416  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.914550  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:38.914579  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:39.401719  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:39.409382  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:39.409427  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:39.902488  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:39.907511  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 200:
	ok
	I1209 11:52:39.914532  663024 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:39.914562  663024 api_server.go:131] duration metric: took 4.013066199s to wait for apiserver health ...
	I1209 11:52:39.914586  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:52:39.914594  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:39.915954  663024 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:52:37.324833  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:37.325397  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:37.325430  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:37.325338  663907 retry.go:31] will retry after 3.636796307s: waiting for machine to come up
	I1209 11:52:40.966039  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:40.966438  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:40.966463  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:40.966419  663907 retry.go:31] will retry after 3.704289622s: waiting for machine to come up
	I1209 11:52:38.592804  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:40.593407  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:38.133368  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:38.632475  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.132993  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.633225  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:40.132552  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:40.633292  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:41.132443  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:41.632994  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:42.132631  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:42.633378  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.917397  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:39.928995  663024 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:52:39.953045  663024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:39.962582  663024 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:39.962628  663024 system_pods.go:61] "coredns-7c65d6cfc9-zzrgn" [dca7a835-3b66-4515-b571-6420afc42c44] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:39.962639  663024 system_pods.go:61] "etcd-default-k8s-diff-port-482476" [2323dbbc-9e7f-4047-b0be-b68b851f4986] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:39.962649  663024 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-482476" [0b7a4936-5282-46a4-a08a-e225b303f6f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:39.962658  663024 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-482476" [c6ff79a0-2177-4c79-8021-c523f8d53e9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:39.962666  663024 system_pods.go:61] "kube-proxy-6th5d" [0cff6df1-1adb-4b7e-8d59-a837db026339] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 11:52:39.962682  663024 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-482476" [524125eb-afd4-4e20-b0f0-e58019e84962] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:39.962694  663024 system_pods.go:61] "metrics-server-6867b74b74-bpccn" [7426c800-9ff7-4778-82a0-6c71fd05a222] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:39.962702  663024 system_pods.go:61] "storage-provisioner" [4478313a-58e8-4d24-ab0b-c087e664200d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:39.962711  663024 system_pods.go:74] duration metric: took 9.637672ms to wait for pod list to return data ...
	I1209 11:52:39.962725  663024 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:39.969576  663024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:39.969611  663024 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:39.969627  663024 node_conditions.go:105] duration metric: took 6.893708ms to run NodePressure ...
	I1209 11:52:39.969660  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:40.340239  663024 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:40.345384  663024 kubeadm.go:739] kubelet initialised
	I1209 11:52:40.345412  663024 kubeadm.go:740] duration metric: took 5.145751ms waiting for restarted kubelet to initialise ...
	I1209 11:52:40.345425  663024 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:40.350721  663024 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:42.357138  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:44.361981  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:44.674598  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.675048  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has current primary IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.675068  661546 main.go:141] libmachine: (embed-certs-005123) Found IP for machine: 192.168.72.218
	I1209 11:52:44.675075  661546 main.go:141] libmachine: (embed-certs-005123) Reserving static IP address...
	I1209 11:52:44.675492  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "embed-certs-005123", mac: "52:54:00:ee:a0:a8", ip: "192.168.72.218"} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.675522  661546 main.go:141] libmachine: (embed-certs-005123) DBG | skip adding static IP to network mk-embed-certs-005123 - found existing host DHCP lease matching {name: "embed-certs-005123", mac: "52:54:00:ee:a0:a8", ip: "192.168.72.218"}
	I1209 11:52:44.675537  661546 main.go:141] libmachine: (embed-certs-005123) Reserved static IP address: 192.168.72.218
	I1209 11:52:44.675555  661546 main.go:141] libmachine: (embed-certs-005123) Waiting for SSH to be available...
	I1209 11:52:44.675566  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Getting to WaitForSSH function...
	I1209 11:52:44.677490  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.677814  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.677860  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.677952  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Using SSH client type: external
	I1209 11:52:44.678012  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa (-rw-------)
	I1209 11:52:44.678042  661546 main.go:141] libmachine: (embed-certs-005123) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:44.678056  661546 main.go:141] libmachine: (embed-certs-005123) DBG | About to run SSH command:
	I1209 11:52:44.678068  661546 main.go:141] libmachine: (embed-certs-005123) DBG | exit 0
	I1209 11:52:44.798377  661546 main.go:141] libmachine: (embed-certs-005123) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:44.798782  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetConfigRaw
	I1209 11:52:44.799532  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:44.801853  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.802223  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.802255  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.802539  661546 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/config.json ...
	I1209 11:52:44.802777  661546 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:44.802799  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:44.802994  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:44.805481  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.805803  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.805838  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.805999  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:44.806219  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.806386  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.806555  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:44.806716  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:44.806886  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:44.806897  661546 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:44.914443  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:44.914480  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:44.914783  661546 buildroot.go:166] provisioning hostname "embed-certs-005123"
	I1209 11:52:44.914810  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:44.914973  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:44.918053  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.918471  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.918508  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.918701  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:44.918935  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.919087  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.919267  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:44.919452  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:44.919624  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:44.919645  661546 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-005123 && echo "embed-certs-005123" | sudo tee /etc/hostname
	I1209 11:52:45.032725  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-005123
	
	I1209 11:52:45.032760  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.035820  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.036222  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.036263  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.036466  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.036666  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.036864  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.037003  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.037189  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.037396  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.037413  661546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-005123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-005123/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-005123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:45.147189  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:45.147225  661546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:45.147284  661546 buildroot.go:174] setting up certificates
	I1209 11:52:45.147299  661546 provision.go:84] configureAuth start
	I1209 11:52:45.147313  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:45.147667  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:45.150526  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.150965  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.151009  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.151118  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.153778  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.154178  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.154213  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.154382  661546 provision.go:143] copyHostCerts
	I1209 11:52:45.154455  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:45.154478  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:45.154549  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:45.154673  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:45.154685  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:45.154717  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:45.154816  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:45.154829  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:45.154857  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:45.154935  661546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.embed-certs-005123 san=[127.0.0.1 192.168.72.218 embed-certs-005123 localhost minikube]
	I1209 11:52:45.382712  661546 provision.go:177] copyRemoteCerts
	I1209 11:52:45.382772  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:45.382801  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.385625  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.386020  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.386050  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.386241  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.386448  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.386626  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.386765  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:45.464427  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:45.488111  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1209 11:52:45.511231  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:45.534104  661546 provision.go:87] duration metric: took 386.787703ms to configureAuth
	I1209 11:52:45.534141  661546 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:45.534411  661546 config.go:182] Loaded profile config "embed-certs-005123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:45.534526  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.537936  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.538370  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.538402  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.538584  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.538826  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.539019  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.539150  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.539378  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.539551  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.539568  661546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:45.771215  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:45.771259  661546 machine.go:96] duration metric: took 968.466766ms to provisionDockerMachine
	I1209 11:52:45.771276  661546 start.go:293] postStartSetup for "embed-certs-005123" (driver="kvm2")
	I1209 11:52:45.771287  661546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:45.771316  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:45.771673  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:45.771709  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.774881  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.775294  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.775340  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.775510  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.775714  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.775899  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.776065  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:45.856991  661546 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:45.862195  661546 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:45.862224  661546 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:45.862295  661546 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:45.862368  661546 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:45.862497  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:45.874850  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:45.899279  661546 start.go:296] duration metric: took 127.984399ms for postStartSetup
	I1209 11:52:45.899332  661546 fix.go:56] duration metric: took 19.756446591s for fixHost
	I1209 11:52:45.899362  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.902428  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.902828  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.902861  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.903117  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.903344  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.903554  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.903704  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.903955  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.904191  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.904209  661546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:46.007164  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745165.964649155
	
	I1209 11:52:46.007194  661546 fix.go:216] guest clock: 1733745165.964649155
	I1209 11:52:46.007217  661546 fix.go:229] Guest: 2024-12-09 11:52:45.964649155 +0000 UTC Remote: 2024-12-09 11:52:45.899337716 +0000 UTC m=+369.711404421 (delta=65.311439ms)
	I1209 11:52:46.007267  661546 fix.go:200] guest clock delta is within tolerance: 65.311439ms
	I1209 11:52:46.007280  661546 start.go:83] releasing machines lock for "embed-certs-005123", held for 19.864428938s
	I1209 11:52:46.007313  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.007616  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:46.011273  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.011799  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.011830  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.012074  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.012681  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.012907  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.013027  661546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:46.013099  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:46.013170  661546 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:46.013196  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:46.016473  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016764  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016840  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.016875  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016964  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:46.017186  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:46.017287  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.017401  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:46.017442  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.017480  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:46.017553  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:46.017785  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:46.017911  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:46.018075  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:46.129248  661546 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:46.136309  661546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:43.091899  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:45.592415  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:46.287879  661546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:46.293689  661546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:46.293770  661546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:46.311972  661546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:46.312009  661546 start.go:495] detecting cgroup driver to use...
	I1209 11:52:46.312085  661546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:46.329406  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:46.344607  661546 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:46.344664  661546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:46.360448  661546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:46.374509  661546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:46.503687  661546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:46.649152  661546 docker.go:233] disabling docker service ...
	I1209 11:52:46.649234  661546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:46.663277  661546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:46.677442  661546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:46.832667  661546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:46.949826  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:46.963119  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:46.981743  661546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:52:46.981834  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:46.991634  661546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:46.991706  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.004032  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.015001  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.025000  661546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:47.035513  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.045431  661546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.061931  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.071531  661546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:47.080492  661546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:47.080559  661546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:47.094021  661546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:52:47.104015  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:47.226538  661546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:47.318832  661546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:47.318911  661546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:47.323209  661546 start.go:563] Will wait 60s for crictl version
	I1209 11:52:47.323276  661546 ssh_runner.go:195] Run: which crictl
	I1209 11:52:47.326773  661546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:47.365536  661546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:47.365629  661546 ssh_runner.go:195] Run: crio --version
	I1209 11:52:47.392781  661546 ssh_runner.go:195] Run: crio --version
	I1209 11:52:47.422945  661546 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:52:43.133189  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:43.632726  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:44.132804  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:44.632952  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:45.132474  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:45.633318  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.133116  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.632595  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:47.133211  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:47.633233  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.858128  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:49.358845  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:47.423936  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:47.426959  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:47.427401  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:47.427425  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:47.427636  661546 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:47.432509  661546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:47.448620  661546 kubeadm.go:883] updating cluster {Name:embed-certs-005123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:47.448772  661546 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:52:47.448824  661546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:47.485100  661546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:52:47.485173  661546 ssh_runner.go:195] Run: which lz4
	I1209 11:52:47.489202  661546 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:47.493060  661546 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:47.493093  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 11:52:48.772297  661546 crio.go:462] duration metric: took 1.283133931s to copy over tarball
	I1209 11:52:48.772381  661546 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:50.959318  661546 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.18690714s)
	I1209 11:52:50.959352  661546 crio.go:469] duration metric: took 2.187018432s to extract the tarball
	I1209 11:52:50.959359  661546 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:50.995746  661546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:51.037764  661546 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:52:51.037792  661546 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:52:51.037799  661546 kubeadm.go:934] updating node { 192.168.72.218 8443 v1.31.2 crio true true} ...
	I1209 11:52:51.037909  661546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-005123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:51.037972  661546 ssh_runner.go:195] Run: crio config
	I1209 11:52:51.080191  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:52:51.080220  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:51.080231  661546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:51.080258  661546 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.218 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-005123 NodeName:embed-certs-005123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:51.080442  661546 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-005123"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.218"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.218"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:51.080544  661546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:51.091894  661546 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:51.091975  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:51.101702  661546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1209 11:52:51.117636  661546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:51.133662  661546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1209 11:52:51.151725  661546 ssh_runner.go:195] Run: grep 192.168.72.218	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:51.155759  661546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:51.167480  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:47.592707  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:50.093177  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:48.132348  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:48.632894  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:49.133272  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:49.633015  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.132977  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.632533  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:51.132939  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:51.632463  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:52.133082  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:52.633298  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.357709  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.357740  663024 pod_ready.go:82] duration metric: took 10.006992001s for pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.357752  663024 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.363374  663024 pod_ready.go:93] pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.363403  663024 pod_ready.go:82] duration metric: took 5.642657ms for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.363417  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.368456  663024 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.368478  663024 pod_ready.go:82] duration metric: took 5.053713ms for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.368488  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.374156  663024 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.374205  663024 pod_ready.go:82] duration metric: took 5.708489ms for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.374219  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6th5d" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.378734  663024 pod_ready.go:93] pod "kube-proxy-6th5d" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.378752  663024 pod_ready.go:82] duration metric: took 4.526066ms for pod "kube-proxy-6th5d" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.378760  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:52.384763  663024 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:53.389110  663024 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:53.389146  663024 pod_ready.go:82] duration metric: took 3.010378852s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:53.389162  663024 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:51.305408  661546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:51.330738  661546 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123 for IP: 192.168.72.218
	I1209 11:52:51.330766  661546 certs.go:194] generating shared ca certs ...
	I1209 11:52:51.330791  661546 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:51.331002  661546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:51.331099  661546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:51.331116  661546 certs.go:256] generating profile certs ...
	I1209 11:52:51.331252  661546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/client.key
	I1209 11:52:51.331333  661546 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.key.a40d22b0
	I1209 11:52:51.331400  661546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.key
	I1209 11:52:51.331595  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:51.331631  661546 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:51.331645  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:51.331680  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:51.331717  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:51.331747  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:51.331824  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:51.332728  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:51.366002  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:51.400591  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:51.431219  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:51.459334  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 11:52:51.487240  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 11:52:51.522273  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:51.545757  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:51.572793  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:51.595719  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:51.618456  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:51.643337  661546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:51.659719  661546 ssh_runner.go:195] Run: openssl version
	I1209 11:52:51.665339  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:51.676145  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.680615  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.680670  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.686782  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:51.697398  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:51.707438  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.711764  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.711832  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.717278  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:51.727774  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:51.738575  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.742996  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.743057  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.748505  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:51.758738  661546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:51.763005  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:51.768964  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:51.775011  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:51.780810  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:51.786716  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:51.792351  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 11:52:51.798098  661546 kubeadm.go:392] StartCluster: {Name:embed-certs-005123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:51.798239  661546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:51.798296  661546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:51.840669  661546 cri.go:89] found id: ""
	I1209 11:52:51.840755  661546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:51.850404  661546 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:51.850429  661546 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:51.850474  661546 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:51.859350  661546 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:51.860405  661546 kubeconfig.go:125] found "embed-certs-005123" server: "https://192.168.72.218:8443"
	I1209 11:52:51.862591  661546 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:51.872497  661546 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.218
	I1209 11:52:51.872539  661546 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:51.872558  661546 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:51.872638  661546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:51.913221  661546 cri.go:89] found id: ""
	I1209 11:52:51.913316  661546 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:51.929885  661546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:51.940078  661546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:51.940105  661546 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:51.940166  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:51.948911  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:51.948977  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:51.958278  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:51.966808  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:51.966879  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:51.975480  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:51.984071  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:51.984127  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:51.992480  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:52.000798  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:52.000873  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
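
The four grep-then-rm steps above are minikube's stale kubeconfig cleanup: any of the standard kubeconfig files that does not reference the expected control-plane endpoint is deleted before the configs are regenerated. A compact equivalent of that per-file check, a sketch using the endpoint and paths shown in the log, is:

    # Delete kubeconfigs that do not point at the expected control-plane endpoint.
    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
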
	I1209 11:52:52.009553  661546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:52.019274  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:52.133477  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.081976  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.293871  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.364259  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
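
Rather than a full "kubeadm init", the restart path re-runs the individual init phases shown above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same generated config. Collected into one script, with the binary path and config file taken from the log, the sequence is roughly:

    #!/bin/bash
    # Re-run the kubeadm init phases individually, as the log above does.
    set -euo pipefail
    KUBEADM_BIN_DIR="/var/lib/minikube/binaries/v1.31.2"
    CONFIG="/var/tmp/minikube/kubeadm.yaml"
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        # word-splitting of $phase into sub-arguments is intentional here
        sudo env PATH="${KUBEADM_BIN_DIR}:$PATH" kubeadm init phase ${phase} --config "${CONFIG}"
    done
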
	I1209 11:52:53.452043  661546 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:53.452147  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:53.952743  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.452498  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.952482  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.452783  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.483411  661546 api_server.go:72] duration metric: took 2.0313706s to wait for apiserver process to appear ...
	I1209 11:52:55.483448  661546 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:55.483473  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:55.483982  661546 api_server.go:269] stopped: https://192.168.72.218:8443/healthz: Get "https://192.168.72.218:8443/healthz": dial tcp 192.168.72.218:8443: connect: connection refused
	I1209 11:52:55.983589  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:52.592309  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:55.257400  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:53.132520  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:53.633385  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.132432  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.632974  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.132958  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.633343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:56.132687  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:56.633236  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:57.133489  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:57.633105  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.396602  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:57.397077  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:58.136225  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:58.136259  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:58.136276  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.174521  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:58.174583  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:58.484089  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.489495  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:58.489536  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:58.984185  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.990889  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:58.990932  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:59.484415  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:59.490878  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 200:
	ok
	I1209 11:52:59.498196  661546 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:59.498231  661546 api_server.go:131] duration metric: took 4.014775842s to wait for apiserver health ...
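
The probe loop above tolerates the transient 403 (anonymous access to /healthz is rejected while the RBAC bootstrap roles are still being created) and 500 (the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks have not finished) responses, and only proceeds once /healthz returns 200. A rough manual equivalent against the same endpoint, using curl instead of minikube's Go client and skipping TLS verification for brevity, would be:

    # Poll the apiserver health endpoint until it reports HTTP 200.
    until curl -ks -o /dev/null -w '%{http_code}' https://192.168.72.218:8443/healthz | grep -q '^200$'; do
        sleep 0.5
    done
    echo "apiserver healthy"
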
	I1209 11:52:59.498241  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:52:59.498247  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:59.499779  661546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:52:59.500941  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:59.514201  661546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:52:59.544391  661546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:59.555798  661546 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:59.555837  661546 system_pods.go:61] "coredns-7c65d6cfc9-cdnjm" [7cb724f8-c570-4a19-808d-da994ec43eaa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:59.555849  661546 system_pods.go:61] "etcd-embed-certs-005123" [bf194765-7520-4b5d-a1e5-b49830a0f620] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:59.555858  661546 system_pods.go:61] "kube-apiserver-embed-certs-005123" [470f6c19-0112-4b0d-89d9-b792e912cf6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:59.555863  661546 system_pods.go:61] "kube-controller-manager-embed-certs-005123" [b42748b2-f3a9-4d29-a832-a30d54b329c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:59.555868  661546 system_pods.go:61] "kube-proxy-b7bf2" [f9aab69c-2232-4f56-a502-ffd033f7ac10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 11:52:59.555877  661546 system_pods.go:61] "kube-scheduler-embed-certs-005123" [e61a8e3c-c1d3-4dab-abb2-6f5221bc5d25] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:59.555885  661546 system_pods.go:61] "metrics-server-6867b74b74-x4kvn" [210cb99c-e3e7-4337-bed4-985cb98143dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:59.555893  661546 system_pods.go:61] "storage-provisioner" [f2f7d9e2-1121-4df2-adb7-a0af32f957ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:59.555903  661546 system_pods.go:74] duration metric: took 11.485008ms to wait for pod list to return data ...
	I1209 11:52:59.555913  661546 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:59.560077  661546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:59.560100  661546 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:59.560110  661546 node_conditions.go:105] duration metric: took 4.192476ms to run NodePressure ...
	I1209 11:52:59.560132  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:59.890141  661546 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:59.895382  661546 kubeadm.go:739] kubelet initialised
	I1209 11:52:59.895414  661546 kubeadm.go:740] duration metric: took 5.227549ms waiting for restarted kubelet to initialise ...
	I1209 11:52:59.895425  661546 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:59.901454  661546 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:57.593336  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:00.094942  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:58.132858  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:58.633386  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.132544  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.633427  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:00.133402  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:00.632719  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:01.132786  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:01.632909  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:02.133197  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:02.632620  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.896691  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:02.396546  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:01.907730  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:03.910835  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:02.591692  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:05.090892  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:03.133091  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:03.633389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.132587  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.633239  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:05.132773  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:05.632456  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:06.132989  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:06.632584  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:07.133153  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:07.633389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.895599  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:06.896541  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.912963  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:06.408122  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.412579  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:10.419673  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:10.419702  661546 pod_ready.go:82] duration metric: took 10.518223469s for pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:10.419716  661546 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:07.591181  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:10.091248  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.132885  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:08.633192  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:09.132446  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:09.633385  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:10.132534  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:10.632399  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.132877  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.633091  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:12.132592  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:12.633185  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.396121  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.901605  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:12.425696  661546 pod_ready.go:103] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.926007  661546 pod_ready.go:93] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.926041  661546 pod_ready.go:82] duration metric: took 3.50631846s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.926053  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.931124  661546 pod_ready.go:93] pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.931150  661546 pod_ready.go:82] duration metric: took 5.090118ms for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.931163  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.935763  661546 pod_ready.go:93] pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.935783  661546 pod_ready.go:82] duration metric: took 4.613902ms for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.935792  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b7bf2" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.940013  661546 pod_ready.go:93] pod "kube-proxy-b7bf2" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.940037  661546 pod_ready.go:82] duration metric: took 4.238468ms for pod "kube-proxy-b7bf2" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.940050  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.944480  661546 pod_ready.go:93] pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.944497  661546 pod_ready.go:82] duration metric: took 4.439334ms for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.944504  661546 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" ...
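
At this point every static control-plane pod and kube-proxy on embed-certs-005123 is Ready, and the only remaining gate is metrics-server, which is still reporting Ready=False below. The same state can be checked from a workstation with kubectl; the context name here is assumed to match the profile name (minikube's usual convention), and the k8s-app=metrics-server label is assumed from the standard metrics-server manifests:

    # Inspect the kube-system pods the restart is waiting on.
    kubectl --context embed-certs-005123 -n kube-system get pods
    # Block until metrics-server becomes Ready (the condition that keeps failing in this run).
    kubectl --context embed-certs-005123 -n kube-system wait pod \
        -l k8s-app=metrics-server --for=condition=Ready --timeout=4m
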
	I1209 11:53:15.951194  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:12.091413  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:14.591239  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.132852  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:13.632863  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:14.132638  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:14.632522  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:15.133201  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:15.632442  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:16.132620  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:16.132747  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:16.171708  662586 cri.go:89] found id: ""
	I1209 11:53:16.171748  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.171761  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:16.171768  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:16.171823  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:16.206350  662586 cri.go:89] found id: ""
	I1209 11:53:16.206381  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.206390  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:16.206398  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:16.206468  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:16.239292  662586 cri.go:89] found id: ""
	I1209 11:53:16.239325  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.239334  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:16.239341  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:16.239397  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:16.275809  662586 cri.go:89] found id: ""
	I1209 11:53:16.275841  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.275850  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:16.275856  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:16.275913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:16.310434  662586 cri.go:89] found id: ""
	I1209 11:53:16.310466  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.310474  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:16.310480  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:16.310540  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:16.347697  662586 cri.go:89] found id: ""
	I1209 11:53:16.347729  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.347738  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:16.347745  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:16.347801  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:16.380949  662586 cri.go:89] found id: ""
	I1209 11:53:16.380977  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.380985  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:16.380992  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:16.381074  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:16.415236  662586 cri.go:89] found id: ""
	I1209 11:53:16.415268  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.415290  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:16.415304  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:16.415321  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:16.459614  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:16.459645  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:16.509575  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:16.509617  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:16.522864  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:16.522898  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:16.644997  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:16.645059  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:16.645106  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
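
For the v1.20.0 cluster handled by process 662586, no control-plane containers are ever found, so each retry falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The same inspection can be run by hand on the node; the commands below are the ones already issued in the log, collected in one place:

    # Diagnose a node whose control plane is not coming up (commands as issued in the log).
    sudo crictl ps -a --quiet --name=kube-apiserver       # any apiserver container at all?
    sudo journalctl -u kubelet -n 400                      # kubelet log tail
    sudo journalctl -u crio -n 400                         # CRI-O log tail
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
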
	I1209 11:53:16.396028  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:18.397195  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:17.951721  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.952199  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:16.591767  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.091470  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:21.095835  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.220978  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:19.233506  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:19.233597  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:19.268975  662586 cri.go:89] found id: ""
	I1209 11:53:19.269007  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.269019  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:19.269027  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:19.269103  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:19.304898  662586 cri.go:89] found id: ""
	I1209 11:53:19.304935  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.304949  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:19.304957  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:19.305034  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:19.344798  662586 cri.go:89] found id: ""
	I1209 11:53:19.344835  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.344846  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:19.344855  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:19.344925  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:19.395335  662586 cri.go:89] found id: ""
	I1209 11:53:19.395377  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.395387  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:19.395395  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:19.395464  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:19.430334  662586 cri.go:89] found id: ""
	I1209 11:53:19.430364  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.430377  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:19.430386  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:19.430465  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:19.468732  662586 cri.go:89] found id: ""
	I1209 11:53:19.468766  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.468775  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:19.468782  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:19.468836  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:19.503194  662586 cri.go:89] found id: ""
	I1209 11:53:19.503242  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.503255  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:19.503263  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:19.503328  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:19.537074  662586 cri.go:89] found id: ""
	I1209 11:53:19.537114  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.537125  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:19.537135  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:19.537151  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:19.590081  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:19.590130  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:19.604350  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:19.604388  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:19.683073  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:19.683106  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:19.683124  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:19.763564  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:19.763611  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:22.302792  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:22.315992  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:22.316079  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:22.350464  662586 cri.go:89] found id: ""
	I1209 11:53:22.350495  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.350505  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:22.350511  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:22.350569  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:22.382832  662586 cri.go:89] found id: ""
	I1209 11:53:22.382867  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.382880  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:22.382889  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:22.382958  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:22.417826  662586 cri.go:89] found id: ""
	I1209 11:53:22.417859  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.417871  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:22.417880  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:22.417963  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:22.451545  662586 cri.go:89] found id: ""
	I1209 11:53:22.451579  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.451588  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:22.451594  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:22.451659  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:22.488413  662586 cri.go:89] found id: ""
	I1209 11:53:22.488448  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.488458  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:22.488464  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:22.488531  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:22.523891  662586 cri.go:89] found id: ""
	I1209 11:53:22.523916  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.523925  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:22.523931  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:22.523990  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:22.555828  662586 cri.go:89] found id: ""
	I1209 11:53:22.555866  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.555879  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:22.555887  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:22.555960  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:22.592133  662586 cri.go:89] found id: ""
	I1209 11:53:22.592171  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.592181  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:22.592192  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:22.592209  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:22.641928  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:22.641966  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:22.655182  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:22.655215  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:53:20.896189  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:23.397242  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:21.957934  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:24.451292  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:23.591147  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:25.591982  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	W1209 11:53:22.724320  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:22.724343  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:22.724359  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:22.811692  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:22.811743  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:25.347903  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:25.360839  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:25.360907  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:25.392880  662586 cri.go:89] found id: ""
	I1209 11:53:25.392917  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.392930  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:25.392939  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:25.393008  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:25.427862  662586 cri.go:89] found id: ""
	I1209 11:53:25.427905  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.427914  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:25.427921  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:25.428009  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:25.463733  662586 cri.go:89] found id: ""
	I1209 11:53:25.463767  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.463778  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:25.463788  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:25.463884  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:25.501653  662586 cri.go:89] found id: ""
	I1209 11:53:25.501681  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.501690  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:25.501697  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:25.501751  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:25.535368  662586 cri.go:89] found id: ""
	I1209 11:53:25.535410  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.535422  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:25.535431  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:25.535511  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:25.569709  662586 cri.go:89] found id: ""
	I1209 11:53:25.569739  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.569748  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:25.569761  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:25.569827  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:25.604352  662586 cri.go:89] found id: ""
	I1209 11:53:25.604389  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.604404  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:25.604413  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:25.604477  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:25.635832  662586 cri.go:89] found id: ""
	I1209 11:53:25.635865  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.635878  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:25.635892  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:25.635908  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:25.650611  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:25.650647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:25.721092  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:25.721121  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:25.721139  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:25.795552  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:25.795598  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:25.858088  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:25.858161  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:25.898217  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.395882  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:26.950691  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.951203  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.091306  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:30.091842  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.410683  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:28.422993  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:28.423072  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:28.455054  662586 cri.go:89] found id: ""
	I1209 11:53:28.455083  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.455092  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:28.455098  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:28.455162  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:28.493000  662586 cri.go:89] found id: ""
	I1209 11:53:28.493037  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.493046  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:28.493052  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:28.493104  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:28.526294  662586 cri.go:89] found id: ""
	I1209 11:53:28.526333  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.526346  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:28.526354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:28.526417  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:28.560383  662586 cri.go:89] found id: ""
	I1209 11:53:28.560414  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.560423  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:28.560430  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:28.560485  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:28.595906  662586 cri.go:89] found id: ""
	I1209 11:53:28.595935  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.595946  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:28.595954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:28.596021  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:28.629548  662586 cri.go:89] found id: ""
	I1209 11:53:28.629584  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.629597  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:28.629607  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:28.629673  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:28.666362  662586 cri.go:89] found id: ""
	I1209 11:53:28.666398  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.666410  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:28.666418  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:28.666494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:28.697704  662586 cri.go:89] found id: ""
	I1209 11:53:28.697736  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.697746  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:28.697756  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:28.697769  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:28.745774  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:28.745816  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:28.759543  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:28.759582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:28.834772  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:28.834795  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:28.834812  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:28.913137  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:28.913178  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:31.460658  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:31.473503  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:31.473575  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:31.506710  662586 cri.go:89] found id: ""
	I1209 11:53:31.506748  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.506760  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:31.506770  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:31.506842  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:31.544127  662586 cri.go:89] found id: ""
	I1209 11:53:31.544188  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.544202  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:31.544211  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:31.544289  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:31.591081  662586 cri.go:89] found id: ""
	I1209 11:53:31.591116  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.591128  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:31.591135  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:31.591213  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:31.629311  662586 cri.go:89] found id: ""
	I1209 11:53:31.629340  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.629348  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:31.629355  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:31.629432  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:31.671035  662586 cri.go:89] found id: ""
	I1209 11:53:31.671069  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.671081  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:31.671090  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:31.671162  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:31.705753  662586 cri.go:89] found id: ""
	I1209 11:53:31.705792  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.705805  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:31.705815  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:31.705889  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:31.739118  662586 cri.go:89] found id: ""
	I1209 11:53:31.739146  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.739155  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:31.739162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:31.739225  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:31.771085  662586 cri.go:89] found id: ""
	I1209 11:53:31.771120  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.771129  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:31.771139  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:31.771152  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:31.820993  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:31.821049  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:31.835576  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:31.835612  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:31.903011  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:31.903039  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:31.903056  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:31.977784  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:31.977830  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:30.896197  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:33.395937  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:31.450832  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:33.451161  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:35.451446  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:32.590724  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:34.592352  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:34.514654  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:34.529156  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:34.529236  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:34.567552  662586 cri.go:89] found id: ""
	I1209 11:53:34.567580  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.567590  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:34.567598  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:34.567665  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:34.608863  662586 cri.go:89] found id: ""
	I1209 11:53:34.608891  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.608900  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:34.608907  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:34.608970  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:34.647204  662586 cri.go:89] found id: ""
	I1209 11:53:34.647242  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.647254  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:34.647263  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:34.647333  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:34.682511  662586 cri.go:89] found id: ""
	I1209 11:53:34.682565  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.682580  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:34.682596  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:34.682674  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:34.717557  662586 cri.go:89] found id: ""
	I1209 11:53:34.717585  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.717595  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:34.717602  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:34.717670  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:34.749814  662586 cri.go:89] found id: ""
	I1209 11:53:34.749851  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.749865  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:34.749876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:34.749949  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:34.782732  662586 cri.go:89] found id: ""
	I1209 11:53:34.782766  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.782776  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:34.782782  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:34.782846  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:34.817114  662586 cri.go:89] found id: ""
	I1209 11:53:34.817149  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.817162  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:34.817175  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:34.817192  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:34.885963  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:34.885986  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:34.886001  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:34.969858  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:34.969905  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:35.006981  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:35.007024  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:35.055360  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:35.055401  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:37.570641  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:37.595904  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:37.595986  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:37.642205  662586 cri.go:89] found id: ""
	I1209 11:53:37.642248  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.642261  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:37.642270  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:37.642347  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:37.676666  662586 cri.go:89] found id: ""
	I1209 11:53:37.676692  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.676701  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:37.676707  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:37.676760  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:35.396037  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.896489  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.952569  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:40.450464  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.092250  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:39.092392  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.714201  662586 cri.go:89] found id: ""
	I1209 11:53:37.714233  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.714243  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:37.714249  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:37.714311  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:37.748018  662586 cri.go:89] found id: ""
	I1209 11:53:37.748047  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.748058  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:37.748067  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:37.748127  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:37.783763  662586 cri.go:89] found id: ""
	I1209 11:53:37.783799  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.783807  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:37.783823  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:37.783898  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:37.822470  662586 cri.go:89] found id: ""
	I1209 11:53:37.822502  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.822514  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:37.822523  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:37.822585  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:37.858493  662586 cri.go:89] found id: ""
	I1209 11:53:37.858527  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.858537  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:37.858543  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:37.858599  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:37.899263  662586 cri.go:89] found id: ""
	I1209 11:53:37.899288  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.899295  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:37.899304  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:37.899321  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:37.972531  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:37.972559  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:37.972575  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:38.046271  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:38.046315  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:38.088829  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:38.088867  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:38.141935  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:38.141985  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:40.657131  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:40.669884  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:40.669954  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:40.704291  662586 cri.go:89] found id: ""
	I1209 11:53:40.704332  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.704345  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:40.704357  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:40.704435  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:40.738637  662586 cri.go:89] found id: ""
	I1209 11:53:40.738673  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.738684  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:40.738690  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:40.738747  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:40.770737  662586 cri.go:89] found id: ""
	I1209 11:53:40.770774  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.770787  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:40.770796  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:40.770865  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:40.805667  662586 cri.go:89] found id: ""
	I1209 11:53:40.805702  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.805729  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:40.805739  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:40.805812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:40.838444  662586 cri.go:89] found id: ""
	I1209 11:53:40.838482  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.838496  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:40.838505  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:40.838578  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:40.871644  662586 cri.go:89] found id: ""
	I1209 11:53:40.871679  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.871691  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:40.871700  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:40.871763  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:40.907242  662586 cri.go:89] found id: ""
	I1209 11:53:40.907275  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.907284  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:40.907291  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:40.907359  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:40.941542  662586 cri.go:89] found id: ""
	I1209 11:53:40.941570  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.941583  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:40.941595  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:40.941616  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:41.022344  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:41.022373  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:41.022387  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:41.097083  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:41.097129  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:41.135303  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:41.135349  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:41.191400  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:41.191447  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:40.396681  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:42.895118  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:42.451217  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:44.951893  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:41.591753  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:44.090762  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:46.091821  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:43.705246  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:43.717939  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:43.718001  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:43.750027  662586 cri.go:89] found id: ""
	I1209 11:53:43.750066  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.750079  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:43.750087  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:43.750156  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:43.782028  662586 cri.go:89] found id: ""
	I1209 11:53:43.782067  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.782081  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:43.782090  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:43.782153  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:43.815509  662586 cri.go:89] found id: ""
	I1209 11:53:43.815549  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.815562  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:43.815570  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:43.815629  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:43.852803  662586 cri.go:89] found id: ""
	I1209 11:53:43.852834  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.852842  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:43.852850  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:43.852915  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:43.886761  662586 cri.go:89] found id: ""
	I1209 11:53:43.886789  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.886798  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:43.886805  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:43.886883  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:43.924427  662586 cri.go:89] found id: ""
	I1209 11:53:43.924458  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.924466  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:43.924478  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:43.924542  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:43.960351  662586 cri.go:89] found id: ""
	I1209 11:53:43.960381  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.960398  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:43.960407  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:43.960476  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:43.993933  662586 cri.go:89] found id: ""
	I1209 11:53:43.993960  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.993969  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:43.993979  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:43.994002  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:44.006915  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:44.006952  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:44.078928  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:44.078981  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:44.078999  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:44.158129  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:44.158188  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:44.199543  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:44.199577  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:46.748607  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:46.762381  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:46.762494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:46.795674  662586 cri.go:89] found id: ""
	I1209 11:53:46.795713  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.795727  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:46.795737  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:46.795812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:46.834027  662586 cri.go:89] found id: ""
	I1209 11:53:46.834055  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.834065  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:46.834072  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:46.834124  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:46.872116  662586 cri.go:89] found id: ""
	I1209 11:53:46.872156  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.872169  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:46.872179  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:46.872264  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:46.906571  662586 cri.go:89] found id: ""
	I1209 11:53:46.906599  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.906608  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:46.906615  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:46.906676  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:46.938266  662586 cri.go:89] found id: ""
	I1209 11:53:46.938303  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.938315  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:46.938323  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:46.938381  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:46.972281  662586 cri.go:89] found id: ""
	I1209 11:53:46.972318  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.972329  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:46.972337  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:46.972391  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:47.004797  662586 cri.go:89] found id: ""
	I1209 11:53:47.004828  662586 logs.go:282] 0 containers: []
	W1209 11:53:47.004837  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:47.004843  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:47.004908  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:47.035877  662586 cri.go:89] found id: ""
	I1209 11:53:47.035905  662586 logs.go:282] 0 containers: []
	W1209 11:53:47.035917  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:47.035931  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:47.035947  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:47.087654  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:47.087706  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:47.102311  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:47.102346  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:47.195370  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:47.195396  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:47.195414  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:47.279103  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:47.279158  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:44.895382  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:46.895838  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:48.896133  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:47.453879  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:49.951686  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:48.591393  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:51.090806  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:49.817942  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:49.830291  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:49.830357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:49.862917  662586 cri.go:89] found id: ""
	I1209 11:53:49.862950  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.862959  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:49.862965  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:49.863033  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:49.894971  662586 cri.go:89] found id: ""
	I1209 11:53:49.895005  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.895018  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:49.895027  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:49.895097  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:49.931737  662586 cri.go:89] found id: ""
	I1209 11:53:49.931775  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.931786  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:49.931800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:49.931862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:49.971064  662586 cri.go:89] found id: ""
	I1209 11:53:49.971097  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.971109  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:49.971118  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:49.971210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:50.005354  662586 cri.go:89] found id: ""
	I1209 11:53:50.005393  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.005417  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:50.005427  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:50.005501  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:50.044209  662586 cri.go:89] found id: ""
	I1209 11:53:50.044240  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.044249  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:50.044257  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:50.044313  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:50.076360  662586 cri.go:89] found id: ""
	I1209 11:53:50.076408  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.076418  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:50.076426  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:50.076494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:50.112125  662586 cri.go:89] found id: ""
	I1209 11:53:50.112168  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.112196  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:50.112210  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:50.112228  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:50.164486  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:50.164530  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:50.178489  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:50.178525  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:50.250131  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:50.250165  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:50.250196  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:50.329733  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:50.329779  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:50.896354  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:53.395149  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:52.450595  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:54.450939  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:53.092311  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:55.590766  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:52.874887  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:52.888518  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:52.888607  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:52.924361  662586 cri.go:89] found id: ""
	I1209 11:53:52.924389  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.924398  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:52.924404  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:52.924467  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:52.957769  662586 cri.go:89] found id: ""
	I1209 11:53:52.957803  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.957816  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:52.957824  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:52.957891  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:52.990339  662586 cri.go:89] found id: ""
	I1209 11:53:52.990376  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.990388  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:52.990397  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:52.990461  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:53.022959  662586 cri.go:89] found id: ""
	I1209 11:53:53.023003  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.023017  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:53.023028  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:53.023111  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:53.060271  662586 cri.go:89] found id: ""
	I1209 11:53:53.060299  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.060315  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:53.060321  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:53.060390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:53.093470  662586 cri.go:89] found id: ""
	I1209 11:53:53.093500  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.093511  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:53.093519  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:53.093575  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:53.128902  662586 cri.go:89] found id: ""
	I1209 11:53:53.128941  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.128955  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:53.128963  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:53.129036  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:53.161927  662586 cri.go:89] found id: ""
	I1209 11:53:53.161955  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.161964  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:53.161974  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:53.161988  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:53.214098  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:53.214140  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:53.229191  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:53.229232  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:53.308648  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:53.308678  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:53.308695  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:53.386776  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:53.386816  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:55.929307  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:55.942217  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:55.942285  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:55.983522  662586 cri.go:89] found id: ""
	I1209 11:53:55.983563  662586 logs.go:282] 0 containers: []
	W1209 11:53:55.983572  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:55.983579  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:55.983645  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:56.017262  662586 cri.go:89] found id: ""
	I1209 11:53:56.017293  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.017308  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:56.017314  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:56.017367  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:56.052385  662586 cri.go:89] found id: ""
	I1209 11:53:56.052419  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.052429  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:56.052436  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:56.052489  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:56.085385  662586 cri.go:89] found id: ""
	I1209 11:53:56.085432  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.085444  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:56.085452  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:56.085519  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:56.122754  662586 cri.go:89] found id: ""
	I1209 11:53:56.122785  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.122794  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:56.122800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:56.122862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:56.159033  662586 cri.go:89] found id: ""
	I1209 11:53:56.159061  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.159070  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:56.159077  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:56.159128  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:56.198022  662586 cri.go:89] found id: ""
	I1209 11:53:56.198058  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.198070  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:56.198078  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:56.198148  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:56.231475  662586 cri.go:89] found id: ""
	I1209 11:53:56.231515  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.231528  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:56.231542  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:56.231559  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:56.304922  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:56.304969  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:56.339875  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:56.339916  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:56.392893  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:56.392929  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:56.406334  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:56.406376  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:56.474037  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:55.895077  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:57.895835  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:56.452163  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:58.950981  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:57.590943  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:00.091057  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:58.974725  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:58.987817  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:58.987890  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:59.020951  662586 cri.go:89] found id: ""
	I1209 11:53:59.020987  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.020996  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:59.021003  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:59.021055  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:59.055675  662586 cri.go:89] found id: ""
	I1209 11:53:59.055715  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.055727  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:59.055733  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:59.055800  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:59.090099  662586 cri.go:89] found id: ""
	I1209 11:53:59.090138  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.090150  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:59.090158  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:59.090252  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:59.124680  662586 cri.go:89] found id: ""
	I1209 11:53:59.124718  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.124730  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:59.124739  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:59.124802  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:59.157772  662586 cri.go:89] found id: ""
	I1209 11:53:59.157808  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.157819  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:59.157828  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:59.157892  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:59.191098  662586 cri.go:89] found id: ""
	I1209 11:53:59.191132  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.191141  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:59.191148  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:59.191212  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:59.224050  662586 cri.go:89] found id: ""
	I1209 11:53:59.224090  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.224102  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:59.224110  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:59.224198  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:59.262361  662586 cri.go:89] found id: ""
	I1209 11:53:59.262397  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.262418  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:59.262432  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:59.262456  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:59.276811  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:59.276844  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:59.349465  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:59.349492  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:59.349506  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:59.429146  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:59.429192  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:59.470246  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:59.470287  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:02.021651  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:02.036039  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:02.036109  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:02.070999  662586 cri.go:89] found id: ""
	I1209 11:54:02.071034  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.071045  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:02.071052  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:02.071119  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:02.107506  662586 cri.go:89] found id: ""
	I1209 11:54:02.107536  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.107546  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:02.107554  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:02.107624  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:02.146279  662586 cri.go:89] found id: ""
	I1209 11:54:02.146314  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.146326  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:02.146342  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:02.146408  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:02.178349  662586 cri.go:89] found id: ""
	I1209 11:54:02.178378  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.178387  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:02.178402  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:02.178460  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:02.211916  662586 cri.go:89] found id: ""
	I1209 11:54:02.211952  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.211963  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:02.211969  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:02.212038  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:02.246334  662586 cri.go:89] found id: ""
	I1209 11:54:02.246370  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.246380  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:02.246387  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:02.246452  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:02.280111  662586 cri.go:89] found id: ""
	I1209 11:54:02.280157  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.280168  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:02.280175  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:02.280246  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:02.314141  662586 cri.go:89] found id: ""
	I1209 11:54:02.314188  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.314203  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:02.314216  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:02.314236  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:02.327220  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:02.327253  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:02.396099  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:02.396127  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:02.396142  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:02.478096  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:02.478148  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:02.515144  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:02.515175  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:59.896109  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:02.396485  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:04.396925  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:01.450279  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:03.450732  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:05.451265  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:02.092010  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:04.092427  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:05.069286  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:05.082453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:05.082540  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:05.116263  662586 cri.go:89] found id: ""
	I1209 11:54:05.116299  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.116313  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:05.116321  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:05.116388  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:05.150736  662586 cri.go:89] found id: ""
	I1209 11:54:05.150775  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.150788  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:05.150796  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:05.150864  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:05.183757  662586 cri.go:89] found id: ""
	I1209 11:54:05.183792  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.183804  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:05.183812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:05.183873  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:05.215986  662586 cri.go:89] found id: ""
	I1209 11:54:05.216017  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.216029  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:05.216038  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:05.216096  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:05.247648  662586 cri.go:89] found id: ""
	I1209 11:54:05.247686  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.247698  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:05.247707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:05.247776  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:05.279455  662586 cri.go:89] found id: ""
	I1209 11:54:05.279484  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.279495  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:05.279504  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:05.279567  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:05.320334  662586 cri.go:89] found id: ""
	I1209 11:54:05.320374  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.320387  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:05.320398  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:05.320490  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:05.353475  662586 cri.go:89] found id: ""
	I1209 11:54:05.353503  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.353512  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:05.353522  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:05.353536  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:05.368320  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:05.368357  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:05.442152  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:05.442193  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:05.442212  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:05.523726  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:05.523768  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:05.562405  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:05.562438  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:06.895764  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.897032  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:07.454237  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:09.456440  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:06.591474  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.591578  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:11.091599  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.115564  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:08.129426  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:08.129523  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:08.162412  662586 cri.go:89] found id: ""
	I1209 11:54:08.162454  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.162467  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:08.162477  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:08.162543  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:08.196821  662586 cri.go:89] found id: ""
	I1209 11:54:08.196860  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.196873  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:08.196882  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:08.196949  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:08.233068  662586 cri.go:89] found id: ""
	I1209 11:54:08.233106  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.233117  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:08.233124  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:08.233184  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:08.268683  662586 cri.go:89] found id: ""
	I1209 11:54:08.268715  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.268724  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:08.268731  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:08.268790  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:08.303237  662586 cri.go:89] found id: ""
	I1209 11:54:08.303276  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.303288  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:08.303309  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:08.303393  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:08.339513  662586 cri.go:89] found id: ""
	I1209 11:54:08.339543  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.339551  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:08.339557  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:08.339612  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:08.376237  662586 cri.go:89] found id: ""
	I1209 11:54:08.376268  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.376289  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:08.376298  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:08.376363  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:08.410530  662586 cri.go:89] found id: ""
	I1209 11:54:08.410560  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.410568  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:08.410577  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:08.410589  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:08.460064  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:08.460101  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:08.474547  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:08.474582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:08.544231  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:08.544260  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:08.544277  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:08.624727  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:08.624775  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:11.167943  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:11.183210  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:11.183294  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:11.221326  662586 cri.go:89] found id: ""
	I1209 11:54:11.221356  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.221365  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:11.221371  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:11.221434  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:11.254688  662586 cri.go:89] found id: ""
	I1209 11:54:11.254721  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.254730  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:11.254736  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:11.254801  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:11.287611  662586 cri.go:89] found id: ""
	I1209 11:54:11.287649  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.287660  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:11.287666  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:11.287738  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:11.320533  662586 cri.go:89] found id: ""
	I1209 11:54:11.320565  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.320574  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:11.320580  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:11.320638  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:11.362890  662586 cri.go:89] found id: ""
	I1209 11:54:11.362923  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.362933  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:11.362939  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:11.363007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:11.418729  662586 cri.go:89] found id: ""
	I1209 11:54:11.418762  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.418772  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:11.418779  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:11.418837  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:11.455336  662586 cri.go:89] found id: ""
	I1209 11:54:11.455374  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.455388  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:11.455397  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:11.455479  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:11.491307  662586 cri.go:89] found id: ""
	I1209 11:54:11.491334  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.491344  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:11.491355  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:11.491369  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:11.543161  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:11.543204  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:11.556633  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:11.556670  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:11.626971  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:11.627001  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:11.627025  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:11.702061  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:11.702107  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:11.396167  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:13.897097  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:11.952029  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:14.451701  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:13.590749  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:15.591845  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:14.245241  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:14.258461  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:14.258544  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:14.292108  662586 cri.go:89] found id: ""
	I1209 11:54:14.292147  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.292156  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:14.292163  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:14.292219  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:14.327347  662586 cri.go:89] found id: ""
	I1209 11:54:14.327381  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.327394  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:14.327403  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:14.327484  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:14.361188  662586 cri.go:89] found id: ""
	I1209 11:54:14.361220  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.361229  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:14.361236  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:14.361290  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:14.394898  662586 cri.go:89] found id: ""
	I1209 11:54:14.394936  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.394948  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:14.394960  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:14.395027  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:14.429326  662586 cri.go:89] found id: ""
	I1209 11:54:14.429402  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.429420  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:14.429431  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:14.429510  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:14.462903  662586 cri.go:89] found id: ""
	I1209 11:54:14.462938  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.462947  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:14.462954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:14.463009  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:14.496362  662586 cri.go:89] found id: ""
	I1209 11:54:14.496396  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.496409  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:14.496418  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:14.496562  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:14.530052  662586 cri.go:89] found id: ""
	I1209 11:54:14.530085  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.530098  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:14.530111  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:14.530131  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:14.543096  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:14.543133  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:14.611030  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:14.611055  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:14.611067  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:14.684984  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:14.685041  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:14.722842  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:14.722881  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:17.275868  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:17.288812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:17.288898  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:17.323732  662586 cri.go:89] found id: ""
	I1209 11:54:17.323766  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.323777  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:17.323786  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:17.323852  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:17.367753  662586 cri.go:89] found id: ""
	I1209 11:54:17.367788  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.367801  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:17.367810  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:17.367878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:17.411444  662586 cri.go:89] found id: ""
	I1209 11:54:17.411476  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.411488  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:17.411496  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:17.411563  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:17.450790  662586 cri.go:89] found id: ""
	I1209 11:54:17.450821  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.450832  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:17.450840  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:17.450913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:17.488824  662586 cri.go:89] found id: ""
	I1209 11:54:17.488859  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.488869  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:17.488876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:17.488948  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:17.522051  662586 cri.go:89] found id: ""
	I1209 11:54:17.522085  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.522094  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:17.522102  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:17.522165  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:17.556653  662586 cri.go:89] found id: ""
	I1209 11:54:17.556687  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.556700  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:17.556707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:17.556783  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:17.591303  662586 cri.go:89] found id: ""
	I1209 11:54:17.591337  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.591355  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:17.591367  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:17.591384  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:17.656675  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:17.656699  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:17.656712  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:16.396574  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:18.896050  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:16.950508  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:19.451294  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:18.091307  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:20.091489  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:17.739894  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:17.739939  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:17.789486  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:17.789517  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:17.843606  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:17.843648  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:20.361896  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:20.378015  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:20.378105  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:20.412252  662586 cri.go:89] found id: ""
	I1209 11:54:20.412299  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.412311  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:20.412327  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:20.412396  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:20.443638  662586 cri.go:89] found id: ""
	I1209 11:54:20.443671  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.443682  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:20.443690  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:20.443758  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:20.478578  662586 cri.go:89] found id: ""
	I1209 11:54:20.478613  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.478625  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:20.478634  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:20.478704  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:20.512232  662586 cri.go:89] found id: ""
	I1209 11:54:20.512266  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.512279  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:20.512295  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:20.512357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:20.544358  662586 cri.go:89] found id: ""
	I1209 11:54:20.544398  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.544413  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:20.544429  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:20.544494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:20.579476  662586 cri.go:89] found id: ""
	I1209 11:54:20.579513  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.579525  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:20.579533  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:20.579600  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:20.613851  662586 cri.go:89] found id: ""
	I1209 11:54:20.613884  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.613897  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:20.613903  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:20.613973  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:20.647311  662586 cri.go:89] found id: ""
	I1209 11:54:20.647342  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.647351  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:20.647362  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:20.647375  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:20.695798  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:20.695839  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:20.709443  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:20.709478  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:20.779211  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:20.779237  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:20.779253  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:20.857966  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:20.858012  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:20.896168  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:22.896667  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:21.455716  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:23.950823  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:25.952038  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:22.592225  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:25.091934  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:23.398095  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:23.412622  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:23.412686  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:23.446582  662586 cri.go:89] found id: ""
	I1209 11:54:23.446616  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.446628  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:23.446637  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:23.446705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:23.487896  662586 cri.go:89] found id: ""
	I1209 11:54:23.487926  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.487935  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:23.487941  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:23.488007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:23.521520  662586 cri.go:89] found id: ""
	I1209 11:54:23.521559  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.521571  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:23.521579  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:23.521651  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:23.561296  662586 cri.go:89] found id: ""
	I1209 11:54:23.561329  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.561342  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:23.561350  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:23.561417  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:23.604936  662586 cri.go:89] found id: ""
	I1209 11:54:23.604965  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.604976  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:23.604985  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:23.605055  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:23.665193  662586 cri.go:89] found id: ""
	I1209 11:54:23.665225  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.665237  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:23.665247  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:23.665315  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:23.700202  662586 cri.go:89] found id: ""
	I1209 11:54:23.700239  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.700251  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:23.700259  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:23.700336  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:23.734877  662586 cri.go:89] found id: ""
	I1209 11:54:23.734907  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.734917  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:23.734927  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:23.734941  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:23.817328  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:23.817371  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:23.855052  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:23.855085  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:23.909107  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:23.909154  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:23.924198  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:23.924227  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:23.991976  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:26.492366  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:26.506223  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:26.506299  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:26.544932  662586 cri.go:89] found id: ""
	I1209 11:54:26.544974  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.544987  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:26.544997  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:26.545080  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:26.579581  662586 cri.go:89] found id: ""
	I1209 11:54:26.579621  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.579634  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:26.579643  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:26.579716  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:26.612510  662586 cri.go:89] found id: ""
	I1209 11:54:26.612545  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.612567  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:26.612577  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:26.612646  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:26.646273  662586 cri.go:89] found id: ""
	I1209 11:54:26.646306  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.646316  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:26.646322  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:26.646376  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:26.682027  662586 cri.go:89] found id: ""
	I1209 11:54:26.682063  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.682072  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:26.682078  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:26.682132  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:26.715822  662586 cri.go:89] found id: ""
	I1209 11:54:26.715876  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.715889  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:26.715898  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:26.715964  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:26.755976  662586 cri.go:89] found id: ""
	I1209 11:54:26.756016  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.756031  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:26.756040  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:26.756122  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:26.787258  662586 cri.go:89] found id: ""
	I1209 11:54:26.787297  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.787308  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:26.787319  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:26.787333  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:26.800534  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:26.800573  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:26.865767  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:26.865798  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:26.865824  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:26.950409  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:26.950460  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:26.994281  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:26.994320  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:25.396411  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:27.894846  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:28.451141  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:30.455101  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:27.591769  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:30.091528  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:29.544568  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:29.565182  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:29.565263  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:29.625116  662586 cri.go:89] found id: ""
	I1209 11:54:29.625155  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.625168  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:29.625181  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:29.625257  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:29.673689  662586 cri.go:89] found id: ""
	I1209 11:54:29.673727  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.673739  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:29.673747  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:29.673811  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:29.705925  662586 cri.go:89] found id: ""
	I1209 11:54:29.705959  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.705971  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:29.705979  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:29.706033  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:29.738731  662586 cri.go:89] found id: ""
	I1209 11:54:29.738759  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.738767  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:29.738774  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:29.738832  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:29.770778  662586 cri.go:89] found id: ""
	I1209 11:54:29.770814  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.770826  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:29.770833  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:29.770899  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:29.801925  662586 cri.go:89] found id: ""
	I1209 11:54:29.801961  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.801973  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:29.801981  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:29.802050  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:29.833681  662586 cri.go:89] found id: ""
	I1209 11:54:29.833712  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.833722  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:29.833727  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:29.833791  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:29.873666  662586 cri.go:89] found id: ""
	I1209 11:54:29.873700  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.873712  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:29.873722  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:29.873735  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:29.914855  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:29.914895  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:29.967730  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:29.967772  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:29.982037  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:29.982070  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:30.047168  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:30.047195  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:30.047212  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:32.623371  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:32.636346  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:32.636411  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:32.677709  662586 cri.go:89] found id: ""
	I1209 11:54:32.677736  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.677744  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:32.677753  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:32.677805  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:29.896176  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.395216  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.952287  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:35.451456  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.092615  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:34.591397  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.710906  662586 cri.go:89] found id: ""
	I1209 11:54:32.710933  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.710942  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:32.710948  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:32.711000  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:32.744623  662586 cri.go:89] found id: ""
	I1209 11:54:32.744654  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.744667  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:32.744676  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:32.744736  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:32.779334  662586 cri.go:89] found id: ""
	I1209 11:54:32.779364  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.779375  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:32.779382  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:32.779443  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:32.814998  662586 cri.go:89] found id: ""
	I1209 11:54:32.815032  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.815046  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:32.815055  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:32.815128  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:32.850054  662586 cri.go:89] found id: ""
	I1209 11:54:32.850099  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.850116  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:32.850127  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:32.850213  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:32.885769  662586 cri.go:89] found id: ""
	I1209 11:54:32.885805  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.885818  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:32.885827  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:32.885899  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:32.927973  662586 cri.go:89] found id: ""
	I1209 11:54:32.928001  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.928010  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:32.928019  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:32.928032  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:32.981915  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:32.981966  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:32.995817  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:32.995851  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:33.062409  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:33.062445  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:33.062462  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:33.146967  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:33.147011  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:35.688225  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:35.701226  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:35.701325  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:35.738628  662586 cri.go:89] found id: ""
	I1209 11:54:35.738655  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.738663  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:35.738670  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:35.738737  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:35.771125  662586 cri.go:89] found id: ""
	I1209 11:54:35.771163  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.771177  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:35.771187  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:35.771260  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:35.806244  662586 cri.go:89] found id: ""
	I1209 11:54:35.806277  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.806290  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:35.806301  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:35.806376  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:35.839871  662586 cri.go:89] found id: ""
	I1209 11:54:35.839912  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.839925  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:35.839932  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:35.840010  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:35.874994  662586 cri.go:89] found id: ""
	I1209 11:54:35.875034  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.875046  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:35.875054  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:35.875129  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:35.910802  662586 cri.go:89] found id: ""
	I1209 11:54:35.910834  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.910846  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:35.910855  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:35.910927  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:35.944633  662586 cri.go:89] found id: ""
	I1209 11:54:35.944663  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.944672  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:35.944678  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:35.944749  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:35.982732  662586 cri.go:89] found id: ""
	I1209 11:54:35.982781  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.982796  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:35.982811  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:35.982830  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:35.996271  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:35.996302  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:36.063463  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:36.063533  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:36.063554  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:36.141789  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:36.141833  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:36.187015  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:36.187047  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:34.895890  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.396472  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.951404  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:40.452814  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.091548  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:39.092168  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:38.739585  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:38.754322  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:38.754394  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:38.792497  662586 cri.go:89] found id: ""
	I1209 11:54:38.792525  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.792535  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:38.792543  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:38.792608  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:38.829730  662586 cri.go:89] found id: ""
	I1209 11:54:38.829759  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.829768  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:38.829774  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:38.829834  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:38.869942  662586 cri.go:89] found id: ""
	I1209 11:54:38.869981  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.869994  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:38.870015  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:38.870085  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:38.906001  662586 cri.go:89] found id: ""
	I1209 11:54:38.906041  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.906054  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:38.906063  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:38.906133  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:38.944389  662586 cri.go:89] found id: ""
	I1209 11:54:38.944427  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.944445  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:38.944453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:38.944534  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:38.979633  662586 cri.go:89] found id: ""
	I1209 11:54:38.979665  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.979674  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:38.979681  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:38.979735  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:39.016366  662586 cri.go:89] found id: ""
	I1209 11:54:39.016402  662586 logs.go:282] 0 containers: []
	W1209 11:54:39.016416  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:39.016424  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:39.016489  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:39.049084  662586 cri.go:89] found id: ""
	I1209 11:54:39.049116  662586 logs.go:282] 0 containers: []
	W1209 11:54:39.049125  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:39.049134  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:39.049148  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:39.113953  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:39.113985  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:39.114004  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:39.191715  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:39.191767  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:39.232127  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:39.232167  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:39.281406  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:39.281448  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:41.795395  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:41.810293  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:41.810364  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:41.849819  662586 cri.go:89] found id: ""
	I1209 11:54:41.849858  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.849872  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:41.849882  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:41.849952  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:41.883871  662586 cri.go:89] found id: ""
	I1209 11:54:41.883908  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.883934  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:41.883942  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:41.884017  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:41.918194  662586 cri.go:89] found id: ""
	I1209 11:54:41.918230  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.918239  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:41.918245  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:41.918312  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:41.950878  662586 cri.go:89] found id: ""
	I1209 11:54:41.950912  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.950924  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:41.950933  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:41.950995  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:41.982922  662586 cri.go:89] found id: ""
	I1209 11:54:41.982964  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.982976  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:41.982985  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:41.983064  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:42.014066  662586 cri.go:89] found id: ""
	I1209 11:54:42.014107  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.014120  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:42.014129  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:42.014229  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:42.048017  662586 cri.go:89] found id: ""
	I1209 11:54:42.048056  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.048070  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:42.048079  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:42.048146  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:42.080585  662586 cri.go:89] found id: ""
	I1209 11:54:42.080614  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.080624  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:42.080634  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:42.080646  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:42.135012  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:42.135054  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:42.148424  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:42.148462  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:42.219179  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:42.219206  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:42.219230  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:42.305855  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:42.305902  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:39.895830  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:41.896255  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:44.398373  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:42.949835  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:44.951542  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:41.590831  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:43.592053  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:45.593044  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:44.843158  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:44.856317  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:44.856380  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:44.890940  662586 cri.go:89] found id: ""
	I1209 11:54:44.890984  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.891003  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:44.891012  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:44.891081  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:44.923657  662586 cri.go:89] found id: ""
	I1209 11:54:44.923684  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.923692  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:44.923698  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:44.923769  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:44.957512  662586 cri.go:89] found id: ""
	I1209 11:54:44.957545  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.957558  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:44.957566  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:44.957636  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:44.998084  662586 cri.go:89] found id: ""
	I1209 11:54:44.998112  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.998121  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:44.998128  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:44.998210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:45.030335  662586 cri.go:89] found id: ""
	I1209 11:54:45.030360  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.030369  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:45.030375  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:45.030447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:45.063098  662586 cri.go:89] found id: ""
	I1209 11:54:45.063127  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.063135  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:45.063141  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:45.063210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:45.098430  662586 cri.go:89] found id: ""
	I1209 11:54:45.098458  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.098466  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:45.098472  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:45.098526  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:45.132064  662586 cri.go:89] found id: ""
	I1209 11:54:45.132094  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.132102  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:45.132113  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:45.132131  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:45.185512  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:45.185556  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:45.199543  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:45.199572  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:45.268777  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:45.268803  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:45.268817  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:45.352250  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:45.352299  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:46.897153  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:49.395935  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:46.952862  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:49.450006  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:48.092394  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:50.591937  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:47.892201  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:47.906961  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:47.907053  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:47.941349  662586 cri.go:89] found id: ""
	I1209 11:54:47.941394  662586 logs.go:282] 0 containers: []
	W1209 11:54:47.941408  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:47.941418  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:47.941479  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:47.981086  662586 cri.go:89] found id: ""
	I1209 11:54:47.981120  662586 logs.go:282] 0 containers: []
	W1209 11:54:47.981133  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:47.981141  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:47.981210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:48.014105  662586 cri.go:89] found id: ""
	I1209 11:54:48.014142  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.014151  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:48.014162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:48.014249  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:48.049506  662586 cri.go:89] found id: ""
	I1209 11:54:48.049535  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.049544  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:48.049552  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:48.049619  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:48.084284  662586 cri.go:89] found id: ""
	I1209 11:54:48.084314  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.084324  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:48.084336  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:48.084406  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:48.117318  662586 cri.go:89] found id: ""
	I1209 11:54:48.117349  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.117362  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:48.117371  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:48.117441  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:48.150121  662586 cri.go:89] found id: ""
	I1209 11:54:48.150151  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.150187  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:48.150198  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:48.150266  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:48.180919  662586 cri.go:89] found id: ""
	I1209 11:54:48.180947  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.180955  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:48.180966  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:48.180978  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:48.249572  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:48.249602  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:48.249617  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:48.324508  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:48.324552  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:48.363856  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:48.363901  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:48.415662  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:48.415721  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:50.929811  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:50.943650  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:50.943714  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:50.976444  662586 cri.go:89] found id: ""
	I1209 11:54:50.976480  662586 logs.go:282] 0 containers: []
	W1209 11:54:50.976493  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:50.976502  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:50.976574  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:51.016567  662586 cri.go:89] found id: ""
	I1209 11:54:51.016600  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.016613  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:51.016621  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:51.016699  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:51.048933  662586 cri.go:89] found id: ""
	I1209 11:54:51.048967  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.048977  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:51.048986  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:51.049073  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:51.083292  662586 cri.go:89] found id: ""
	I1209 11:54:51.083333  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.083345  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:51.083354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:51.083423  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:51.118505  662586 cri.go:89] found id: ""
	I1209 11:54:51.118547  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.118560  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:51.118571  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:51.118644  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:51.152818  662586 cri.go:89] found id: ""
	I1209 11:54:51.152847  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.152856  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:51.152870  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:51.152922  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:51.186953  662586 cri.go:89] found id: ""
	I1209 11:54:51.186981  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.186991  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:51.186997  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:51.187063  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:51.219305  662586 cri.go:89] found id: ""
	I1209 11:54:51.219337  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.219348  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:51.219361  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:51.219380  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:51.256295  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:51.256338  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:51.313751  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:51.313806  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:51.326940  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:51.326977  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:51.397395  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:51.397428  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:51.397445  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:51.396434  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.896554  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:51.456719  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.951566  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:52.592043  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:55.091800  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.975557  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:53.989509  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:53.989581  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:54.024363  662586 cri.go:89] found id: ""
	I1209 11:54:54.024403  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.024416  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:54.024423  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:54.024484  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:54.062618  662586 cri.go:89] found id: ""
	I1209 11:54:54.062649  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.062659  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:54.062667  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:54.062739  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:54.100194  662586 cri.go:89] found id: ""
	I1209 11:54:54.100231  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.100243  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:54.100252  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:54.100324  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:54.135302  662586 cri.go:89] found id: ""
	I1209 11:54:54.135341  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.135354  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:54.135363  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:54.135447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:54.170898  662586 cri.go:89] found id: ""
	I1209 11:54:54.170940  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.170953  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:54.170963  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:54.171035  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:54.205098  662586 cri.go:89] found id: ""
	I1209 11:54:54.205138  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.205151  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:54.205159  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:54.205223  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:54.239153  662586 cri.go:89] found id: ""
	I1209 11:54:54.239210  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.239226  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:54.239234  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:54.239307  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:54.278213  662586 cri.go:89] found id: ""
	I1209 11:54:54.278248  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.278260  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:54.278275  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:54.278296  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:54.348095  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:54.348128  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:54.348156  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:54.427181  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:54.427224  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:54.467623  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:54.467656  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:54.519690  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:54.519734  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:57.033524  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:57.046420  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:57.046518  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:57.079588  662586 cri.go:89] found id: ""
	I1209 11:54:57.079616  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.079626  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:57.079633  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:57.079687  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:57.114944  662586 cri.go:89] found id: ""
	I1209 11:54:57.114973  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.114982  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:57.114988  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:57.115043  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:57.147667  662586 cri.go:89] found id: ""
	I1209 11:54:57.147708  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.147721  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:57.147730  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:57.147794  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:57.182339  662586 cri.go:89] found id: ""
	I1209 11:54:57.182370  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.182386  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:57.182395  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:57.182470  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:57.223129  662586 cri.go:89] found id: ""
	I1209 11:54:57.223170  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.223186  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:57.223197  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:57.223270  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:57.262351  662586 cri.go:89] found id: ""
	I1209 11:54:57.262386  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.262398  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:57.262409  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:57.262471  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:57.298743  662586 cri.go:89] found id: ""
	I1209 11:54:57.298772  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.298782  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:57.298789  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:57.298856  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:57.339030  662586 cri.go:89] found id: ""
	I1209 11:54:57.339064  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.339073  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:57.339085  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:57.339122  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:57.352603  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:57.352637  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:57.426627  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:57.426653  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:57.426669  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:57.515357  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:57.515401  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:57.554882  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:57.554925  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:56.396610  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:58.895822  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:56.451429  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:58.951440  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:57.590864  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:00.091967  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:00.112082  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:00.124977  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:00.125056  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:00.159003  662586 cri.go:89] found id: ""
	I1209 11:55:00.159032  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.159041  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:00.159048  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:00.159101  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:00.192479  662586 cri.go:89] found id: ""
	I1209 11:55:00.192515  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.192527  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:00.192533  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:00.192587  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:00.226146  662586 cri.go:89] found id: ""
	I1209 11:55:00.226194  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.226208  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:00.226216  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:00.226273  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:00.260389  662586 cri.go:89] found id: ""
	I1209 11:55:00.260420  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.260430  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:00.260442  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:00.260500  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:00.296091  662586 cri.go:89] found id: ""
	I1209 11:55:00.296121  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.296131  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:00.296138  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:00.296195  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:00.332101  662586 cri.go:89] found id: ""
	I1209 11:55:00.332137  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.332150  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:00.332158  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:00.332244  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:00.377329  662586 cri.go:89] found id: ""
	I1209 11:55:00.377358  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.377368  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:00.377374  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:00.377438  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:00.415660  662586 cri.go:89] found id: ""
	I1209 11:55:00.415688  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.415751  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:00.415767  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:00.415781  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:00.467734  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:00.467776  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:00.481244  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:00.481280  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:00.545721  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:00.545755  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:00.545777  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:00.624482  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:00.624533  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:01.396452  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.895539  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:01.452337  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.950752  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:05.951246  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:02.092654  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:04.592173  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.168340  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:03.183354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:03.183439  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:03.223131  662586 cri.go:89] found id: ""
	I1209 11:55:03.223171  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.223185  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:03.223193  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:03.223263  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:03.256561  662586 cri.go:89] found id: ""
	I1209 11:55:03.256595  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.256603  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:03.256609  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:03.256667  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:03.289670  662586 cri.go:89] found id: ""
	I1209 11:55:03.289707  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.289722  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:03.289738  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:03.289813  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:03.323687  662586 cri.go:89] found id: ""
	I1209 11:55:03.323714  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.323724  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:03.323730  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:03.323786  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:03.358163  662586 cri.go:89] found id: ""
	I1209 11:55:03.358221  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.358233  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:03.358241  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:03.358311  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:03.399688  662586 cri.go:89] found id: ""
	I1209 11:55:03.399721  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.399734  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:03.399744  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:03.399812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:03.433909  662586 cri.go:89] found id: ""
	I1209 11:55:03.433939  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.433948  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:03.433954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:03.434011  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:03.470208  662586 cri.go:89] found id: ""
	I1209 11:55:03.470239  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.470248  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:03.470270  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:03.470289  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:03.545801  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:03.545848  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:03.584357  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:03.584389  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:03.641241  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:03.641283  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:03.657034  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:03.657080  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:03.731285  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:06.232380  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:06.246339  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:06.246411  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:06.281323  662586 cri.go:89] found id: ""
	I1209 11:55:06.281362  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.281377  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:06.281385  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:06.281444  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:06.318225  662586 cri.go:89] found id: ""
	I1209 11:55:06.318261  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.318277  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:06.318293  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:06.318364  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:06.353649  662586 cri.go:89] found id: ""
	I1209 11:55:06.353685  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.353699  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:06.353708  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:06.353782  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:06.395204  662586 cri.go:89] found id: ""
	I1209 11:55:06.395242  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.395257  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:06.395266  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:06.395335  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:06.436421  662586 cri.go:89] found id: ""
	I1209 11:55:06.436452  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.436462  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:06.436469  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:06.436524  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:06.472218  662586 cri.go:89] found id: ""
	I1209 11:55:06.472246  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.472255  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:06.472268  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:06.472335  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:06.506585  662586 cri.go:89] found id: ""
	I1209 11:55:06.506629  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.506640  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:06.506647  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:06.506702  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:06.541442  662586 cri.go:89] found id: ""
	I1209 11:55:06.541472  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.541481  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:06.541493  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:06.541512  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:06.592642  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:06.592682  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:06.606764  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:06.606805  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:06.677693  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:06.677720  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:06.677740  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:06.766074  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:06.766124  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:05.896263  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:08.396283  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:07.951409  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:10.451540  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:06.592724  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:09.091961  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:09.305144  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:09.319352  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:09.319444  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:09.357918  662586 cri.go:89] found id: ""
	I1209 11:55:09.358027  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.358066  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:09.358077  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:09.358139  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:09.413181  662586 cri.go:89] found id: ""
	I1209 11:55:09.413213  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.413226  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:09.413234  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:09.413310  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:09.448417  662586 cri.go:89] found id: ""
	I1209 11:55:09.448460  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.448471  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:09.448480  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:09.448566  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:09.489732  662586 cri.go:89] found id: ""
	I1209 11:55:09.489765  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.489775  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:09.489781  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:09.489845  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:09.524919  662586 cri.go:89] found id: ""
	I1209 11:55:09.524948  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.524959  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:09.524968  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:09.525051  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:09.563268  662586 cri.go:89] found id: ""
	I1209 11:55:09.563301  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.563311  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:09.563318  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:09.563373  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:09.598747  662586 cri.go:89] found id: ""
	I1209 11:55:09.598780  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.598790  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:09.598798  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:09.598866  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:09.634447  662586 cri.go:89] found id: ""
	I1209 11:55:09.634479  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.634492  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:09.634505  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:09.634520  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:09.647380  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:09.647419  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:09.721335  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:09.721363  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:09.721380  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:09.801039  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:09.801088  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:09.840929  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:09.840971  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:12.393810  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:12.407553  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:12.407654  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:12.444391  662586 cri.go:89] found id: ""
	I1209 11:55:12.444437  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.444450  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:12.444459  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:12.444533  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:12.482714  662586 cri.go:89] found id: ""
	I1209 11:55:12.482752  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.482764  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:12.482771  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:12.482853  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:12.518139  662586 cri.go:89] found id: ""
	I1209 11:55:12.518187  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.518202  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:12.518211  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:12.518281  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:12.556903  662586 cri.go:89] found id: ""
	I1209 11:55:12.556938  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.556950  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:12.556958  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:12.557028  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:12.591915  662586 cri.go:89] found id: ""
	I1209 11:55:12.591953  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.591963  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:12.591971  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:12.592038  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:12.629767  662586 cri.go:89] found id: ""
	I1209 11:55:12.629797  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.629806  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:12.629812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:12.629878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:12.667677  662586 cri.go:89] found id: ""
	I1209 11:55:12.667710  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.667720  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:12.667727  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:12.667781  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:10.896109  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.896992  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.451770  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:14.952359  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:11.591952  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:14.092213  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.705720  662586 cri.go:89] found id: ""
	I1209 11:55:12.705747  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.705756  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:12.705766  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:12.705780  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:12.758399  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:12.758441  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:12.772297  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:12.772336  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:12.839545  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:12.839569  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:12.839582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:12.918424  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:12.918467  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:15.458122  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:15.473193  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:15.473284  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:15.508756  662586 cri.go:89] found id: ""
	I1209 11:55:15.508790  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.508799  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:15.508806  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:15.508862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:15.544735  662586 cri.go:89] found id: ""
	I1209 11:55:15.544770  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.544782  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:15.544791  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:15.544866  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:15.577169  662586 cri.go:89] found id: ""
	I1209 11:55:15.577200  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.577210  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:15.577216  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:15.577277  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:15.610662  662586 cri.go:89] found id: ""
	I1209 11:55:15.610690  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.610700  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:15.610707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:15.610763  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:15.645339  662586 cri.go:89] found id: ""
	I1209 11:55:15.645375  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.645386  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:15.645394  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:15.645469  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:15.682044  662586 cri.go:89] found id: ""
	I1209 11:55:15.682079  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.682096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:15.682106  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:15.682201  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:15.717193  662586 cri.go:89] found id: ""
	I1209 11:55:15.717228  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.717245  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:15.717256  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:15.717332  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:15.751756  662586 cri.go:89] found id: ""
	I1209 11:55:15.751792  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.751803  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:15.751813  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:15.751827  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:15.811010  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:15.811063  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:15.842556  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:15.842597  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:15.920169  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:15.920195  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:15.920209  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:16.003180  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:16.003226  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:15.395666  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:17.396041  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:19.396262  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:17.451272  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:19.951638  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:16.591423  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:18.592456  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:21.090108  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:18.542563  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:18.555968  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:18.556059  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:18.588746  662586 cri.go:89] found id: ""
	I1209 11:55:18.588780  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.588790  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:18.588797  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:18.588854  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:18.623664  662586 cri.go:89] found id: ""
	I1209 11:55:18.623707  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.623720  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:18.623728  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:18.623798  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:18.659012  662586 cri.go:89] found id: ""
	I1209 11:55:18.659051  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.659065  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:18.659074  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:18.659148  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:18.693555  662586 cri.go:89] found id: ""
	I1209 11:55:18.693588  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.693600  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:18.693607  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:18.693661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:18.726609  662586 cri.go:89] found id: ""
	I1209 11:55:18.726641  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.726652  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:18.726659  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:18.726712  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:18.760654  662586 cri.go:89] found id: ""
	I1209 11:55:18.760682  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.760694  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:18.760704  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:18.760761  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:18.794656  662586 cri.go:89] found id: ""
	I1209 11:55:18.794688  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.794699  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:18.794706  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:18.794769  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:18.829988  662586 cri.go:89] found id: ""
	I1209 11:55:18.830030  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.830045  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:18.830059  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:18.830073  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:18.872523  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:18.872558  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:18.929408  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:18.929449  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:18.943095  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:18.943133  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:19.009125  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:19.009150  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:19.009164  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:21.587418  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:21.606271  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:21.606358  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:21.653536  662586 cri.go:89] found id: ""
	I1209 11:55:21.653574  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.653586  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:21.653595  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:21.653671  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:21.687023  662586 cri.go:89] found id: ""
	I1209 11:55:21.687049  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.687060  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:21.687068  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:21.687131  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:21.720112  662586 cri.go:89] found id: ""
	I1209 11:55:21.720150  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.720163  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:21.720171  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:21.720243  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:21.754697  662586 cri.go:89] found id: ""
	I1209 11:55:21.754729  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.754740  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:21.754749  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:21.754814  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:21.793926  662586 cri.go:89] found id: ""
	I1209 11:55:21.793957  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.793967  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:21.793973  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:21.794040  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:21.827572  662586 cri.go:89] found id: ""
	I1209 11:55:21.827609  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.827622  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:21.827633  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:21.827700  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:21.861442  662586 cri.go:89] found id: ""
	I1209 11:55:21.861472  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.861490  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:21.861499  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:21.861565  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:21.894858  662586 cri.go:89] found id: ""
	I1209 11:55:21.894884  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.894892  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:21.894901  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:21.894914  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:21.942567  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:21.942625  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:21.956849  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:21.956879  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:22.020700  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:22.020724  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:22.020738  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:22.095730  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:22.095767  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:21.896304  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.395936  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:21.951928  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.450997  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:23.090962  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:25.091816  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.631715  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:24.644165  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:24.644252  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:24.677720  662586 cri.go:89] found id: ""
	I1209 11:55:24.677757  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.677769  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:24.677778  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:24.677835  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:24.711053  662586 cri.go:89] found id: ""
	I1209 11:55:24.711086  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.711095  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:24.711101  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:24.711154  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:24.744107  662586 cri.go:89] found id: ""
	I1209 11:55:24.744139  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.744148  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:24.744154  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:24.744210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:24.777811  662586 cri.go:89] found id: ""
	I1209 11:55:24.777853  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.777866  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:24.777876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:24.777938  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:24.810524  662586 cri.go:89] found id: ""
	I1209 11:55:24.810558  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.810571  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:24.810580  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:24.810648  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:24.843551  662586 cri.go:89] found id: ""
	I1209 11:55:24.843582  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.843590  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:24.843597  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:24.843649  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:24.875342  662586 cri.go:89] found id: ""
	I1209 11:55:24.875371  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.875384  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:24.875390  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:24.875446  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:24.910298  662586 cri.go:89] found id: ""
	I1209 11:55:24.910329  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.910340  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:24.910352  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:24.910377  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:24.962151  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:24.962204  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:24.976547  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:24.976577  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:25.050606  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:25.050635  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:25.050652  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:25.134204  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:25.134254  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:27.671220  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:27.685132  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:27.685194  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:26.895311  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:28.895954  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:26.950106  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:28.950915  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:30.952019  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:27.591908  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:30.090353  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:27.718113  662586 cri.go:89] found id: ""
	I1209 11:55:27.718141  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.718150  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:27.718160  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:27.718242  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:27.752350  662586 cri.go:89] found id: ""
	I1209 11:55:27.752384  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.752395  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:27.752401  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:27.752481  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:27.797360  662586 cri.go:89] found id: ""
	I1209 11:55:27.797393  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.797406  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:27.797415  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:27.797488  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:27.834549  662586 cri.go:89] found id: ""
	I1209 11:55:27.834579  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.834588  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:27.834594  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:27.834655  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:27.874403  662586 cri.go:89] found id: ""
	I1209 11:55:27.874440  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.874465  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:27.874474  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:27.874557  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:27.914324  662586 cri.go:89] found id: ""
	I1209 11:55:27.914360  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.914373  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:27.914380  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:27.914450  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:27.948001  662586 cri.go:89] found id: ""
	I1209 11:55:27.948043  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.948056  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:27.948066  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:27.948219  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:27.982329  662586 cri.go:89] found id: ""
	I1209 11:55:27.982360  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.982369  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:27.982379  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:27.982391  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:28.038165  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:28.038228  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:28.051578  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:28.051609  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:28.119914  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:28.119937  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:28.119951  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:28.195634  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:28.195679  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:30.735392  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:30.748430  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:30.748521  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:30.780500  662586 cri.go:89] found id: ""
	I1209 11:55:30.780528  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.780537  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:30.780544  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:30.780606  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:30.812430  662586 cri.go:89] found id: ""
	I1209 11:55:30.812462  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.812470  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:30.812477  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:30.812530  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:30.854030  662586 cri.go:89] found id: ""
	I1209 11:55:30.854057  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.854066  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:30.854073  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:30.854130  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:30.892144  662586 cri.go:89] found id: ""
	I1209 11:55:30.892182  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.892202  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:30.892211  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:30.892284  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:30.927540  662586 cri.go:89] found id: ""
	I1209 11:55:30.927576  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.927590  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:30.927597  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:30.927660  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:30.963820  662586 cri.go:89] found id: ""
	I1209 11:55:30.963852  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.963861  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:30.963867  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:30.963920  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:30.997793  662586 cri.go:89] found id: ""
	I1209 11:55:30.997819  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.997828  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:30.997836  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:30.997902  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:31.031649  662586 cri.go:89] found id: ""
	I1209 11:55:31.031699  662586 logs.go:282] 0 containers: []
	W1209 11:55:31.031712  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:31.031726  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:31.031746  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:31.101464  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:31.101492  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:31.101509  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:31.184635  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:31.184681  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:31.222690  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:31.222732  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:31.276518  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:31.276566  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:30.896544  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.395861  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.451560  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:35.952567  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:32.091788  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:34.592091  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.790941  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:33.805299  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:33.805390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:33.844205  662586 cri.go:89] found id: ""
	I1209 11:55:33.844241  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.844253  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:33.844262  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:33.844337  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:33.883378  662586 cri.go:89] found id: ""
	I1209 11:55:33.883410  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.883424  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:33.883431  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:33.883505  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:33.920007  662586 cri.go:89] found id: ""
	I1209 11:55:33.920049  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.920061  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:33.920074  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:33.920141  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:33.956111  662586 cri.go:89] found id: ""
	I1209 11:55:33.956163  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.956175  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:33.956183  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:33.956241  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:33.990057  662586 cri.go:89] found id: ""
	I1209 11:55:33.990092  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.990102  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:33.990109  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:33.990166  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:34.023046  662586 cri.go:89] found id: ""
	I1209 11:55:34.023082  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.023096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:34.023103  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:34.023171  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:34.055864  662586 cri.go:89] found id: ""
	I1209 11:55:34.055898  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.055909  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:34.055916  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:34.055987  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:34.091676  662586 cri.go:89] found id: ""
	I1209 11:55:34.091710  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.091722  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:34.091733  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:34.091747  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:34.142959  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:34.143002  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:34.156431  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:34.156466  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:34.230277  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:34.230303  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:34.230320  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:34.313660  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:34.313713  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
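	(Every "describe nodes" attempt in these cycles fails identically: the bundled v1.20.0 kubectl is refused on localhost:8443, which is consistent with the empty kube-apiserver container listing just above it. A hedged, purely illustrative Go sketch of a reachability check that reproduces that symptom:)

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The endpoint the failing "kubectl describe nodes" calls were refused on.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // Matches the "connection refused" stderr repeated in the log.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }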
	I1209 11:55:36.850056  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:36.862486  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:36.862582  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:36.893134  662586 cri.go:89] found id: ""
	I1209 11:55:36.893163  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.893173  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:36.893179  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:36.893257  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:36.927438  662586 cri.go:89] found id: ""
	I1209 11:55:36.927469  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.927479  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:36.927485  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:36.927546  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:36.958787  662586 cri.go:89] found id: ""
	I1209 11:55:36.958818  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.958829  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:36.958837  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:36.958901  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:36.995470  662586 cri.go:89] found id: ""
	I1209 11:55:36.995508  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.995520  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:36.995529  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:36.995590  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:37.026705  662586 cri.go:89] found id: ""
	I1209 11:55:37.026736  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.026746  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:37.026752  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:37.026805  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:37.059717  662586 cri.go:89] found id: ""
	I1209 11:55:37.059748  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.059756  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:37.059762  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:37.059820  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:37.094049  662586 cri.go:89] found id: ""
	I1209 11:55:37.094076  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.094088  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:37.094097  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:37.094190  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:37.128684  662586 cri.go:89] found id: ""
	I1209 11:55:37.128715  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.128724  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:37.128735  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:37.128755  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:37.177932  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:37.177973  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:37.191218  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:37.191252  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:37.256488  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:37.256521  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:37.256538  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:37.330603  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:37.330647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:35.895823  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.895972  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.952764  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:40.450704  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.092013  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:39.591402  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
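	(The interleaved pod_ready lines come from three other test processes — 663024, 661546 and 662109 — polling their metrics-server pods, which remain Ready=False for the entire window shown. The sketch below shows only the shape of that polling; isMetricsServerReady is a hypothetical stand-in for the real Kubernetes API check, and the interval and timeout are assumptions, not the values the tests actually use.)

    package main

    import (
        "fmt"
        "time"
    )

    // isMetricsServerReady is a hypothetical stand-in for the real check, which
    // inspects the pod's Ready condition through the Kubernetes API.
    func isMetricsServerReady() bool {
        return false // never becomes Ready, as in the log above
    }

    func main() {
        deadline := time.Now().Add(30 * time.Second) // assumed; the real tests wait far longer
        for time.Now().Before(deadline) {
            if isMetricsServerReady() {
                fmt.Println("metrics-server is Ready")
                return
            }
            fmt.Println(`pod "metrics-server" has status "Ready":"False"`)
            time.Sleep(2500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for metrics-server to become Ready")
    }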
	I1209 11:55:39.868604  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:39.881991  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:39.882063  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:39.916750  662586 cri.go:89] found id: ""
	I1209 11:55:39.916786  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.916799  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:39.916806  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:39.916874  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:39.957744  662586 cri.go:89] found id: ""
	I1209 11:55:39.957773  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.957781  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:39.957788  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:39.957854  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:39.994613  662586 cri.go:89] found id: ""
	I1209 11:55:39.994645  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.994654  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:39.994661  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:39.994726  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:40.032606  662586 cri.go:89] found id: ""
	I1209 11:55:40.032635  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.032644  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:40.032650  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:40.032710  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:40.067172  662586 cri.go:89] found id: ""
	I1209 11:55:40.067204  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.067214  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:40.067221  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:40.067278  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:40.101391  662586 cri.go:89] found id: ""
	I1209 11:55:40.101423  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.101432  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:40.101439  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:40.101510  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:40.133160  662586 cri.go:89] found id: ""
	I1209 11:55:40.133196  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.133209  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:40.133217  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:40.133283  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:40.166105  662586 cri.go:89] found id: ""
	I1209 11:55:40.166137  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.166145  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:40.166160  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:40.166187  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:40.231525  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:40.231559  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:40.231582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:40.311298  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:40.311354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:40.350040  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:40.350077  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:40.404024  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:40.404061  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:39.896541  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.396800  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.453720  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:44.950595  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.091300  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:44.591230  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.917868  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:42.930289  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:42.930357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:42.962822  662586 cri.go:89] found id: ""
	I1209 11:55:42.962856  662586 logs.go:282] 0 containers: []
	W1209 11:55:42.962869  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:42.962878  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:42.962950  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:42.996932  662586 cri.go:89] found id: ""
	I1209 11:55:42.996962  662586 logs.go:282] 0 containers: []
	W1209 11:55:42.996972  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:42.996979  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:42.997040  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:43.031782  662586 cri.go:89] found id: ""
	I1209 11:55:43.031824  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.031837  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:43.031846  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:43.031915  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:43.064717  662586 cri.go:89] found id: ""
	I1209 11:55:43.064751  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.064764  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:43.064774  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:43.064851  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:43.097248  662586 cri.go:89] found id: ""
	I1209 11:55:43.097278  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.097287  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:43.097294  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:43.097356  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:43.135726  662586 cri.go:89] found id: ""
	I1209 11:55:43.135766  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.135779  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:43.135788  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:43.135881  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:43.171120  662586 cri.go:89] found id: ""
	I1209 11:55:43.171148  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.171157  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:43.171163  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:43.171216  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:43.207488  662586 cri.go:89] found id: ""
	I1209 11:55:43.207523  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.207533  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:43.207545  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:43.207565  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:43.276112  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:43.276142  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:43.276159  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:43.354942  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:43.354990  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:43.392755  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:43.392800  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:43.445708  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:43.445752  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:45.962533  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:45.975508  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:45.975589  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:46.009619  662586 cri.go:89] found id: ""
	I1209 11:55:46.009653  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.009663  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:46.009670  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:46.009726  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:46.042218  662586 cri.go:89] found id: ""
	I1209 11:55:46.042250  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.042259  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:46.042265  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:46.042318  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:46.076204  662586 cri.go:89] found id: ""
	I1209 11:55:46.076239  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.076249  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:46.076255  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:46.076326  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:46.113117  662586 cri.go:89] found id: ""
	I1209 11:55:46.113145  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.113154  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:46.113160  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:46.113225  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:46.148232  662586 cri.go:89] found id: ""
	I1209 11:55:46.148277  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.148293  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:46.148303  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:46.148379  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:46.185028  662586 cri.go:89] found id: ""
	I1209 11:55:46.185083  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.185096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:46.185106  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:46.185200  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:46.222882  662586 cri.go:89] found id: ""
	I1209 11:55:46.222920  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.222933  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:46.222941  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:46.223007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:46.263486  662586 cri.go:89] found id: ""
	I1209 11:55:46.263528  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.263538  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:46.263549  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:46.263565  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:46.340524  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:46.340550  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:46.340567  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:46.422768  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:46.422810  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:46.464344  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:46.464382  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:46.517311  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:46.517354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:44.895283  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.895427  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:48.895674  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.952912  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:48.953432  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.591521  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:49.093057  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:49.031192  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:49.043840  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:49.043929  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:49.077648  662586 cri.go:89] found id: ""
	I1209 11:55:49.077705  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.077720  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:49.077730  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:49.077802  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:49.114111  662586 cri.go:89] found id: ""
	I1209 11:55:49.114138  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.114146  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:49.114154  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:49.114236  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:49.147870  662586 cri.go:89] found id: ""
	I1209 11:55:49.147908  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.147917  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:49.147923  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:49.147976  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:49.185223  662586 cri.go:89] found id: ""
	I1209 11:55:49.185256  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.185269  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:49.185277  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:49.185350  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:49.218037  662586 cri.go:89] found id: ""
	I1209 11:55:49.218068  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.218077  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:49.218084  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:49.218138  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:49.255483  662586 cri.go:89] found id: ""
	I1209 11:55:49.255522  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.255535  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:49.255549  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:49.255629  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:49.288623  662586 cri.go:89] found id: ""
	I1209 11:55:49.288650  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.288659  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:49.288666  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:49.288732  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:49.322880  662586 cri.go:89] found id: ""
	I1209 11:55:49.322913  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.322921  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:49.322930  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:49.322943  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:49.372380  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:49.372428  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:49.385877  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:49.385914  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:49.460078  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:49.460101  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:49.460114  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:49.534588  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:49.534647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:52.071408  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:52.084198  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:52.084276  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:52.118908  662586 cri.go:89] found id: ""
	I1209 11:55:52.118937  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.118950  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:52.118958  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:52.119026  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:52.156494  662586 cri.go:89] found id: ""
	I1209 11:55:52.156521  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.156530  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:52.156535  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:52.156586  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:52.196037  662586 cri.go:89] found id: ""
	I1209 11:55:52.196075  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.196094  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:52.196102  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:52.196177  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:52.229436  662586 cri.go:89] found id: ""
	I1209 11:55:52.229465  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.229477  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:52.229486  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:52.229558  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:52.268751  662586 cri.go:89] found id: ""
	I1209 11:55:52.268785  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.268797  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:52.268805  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:52.268871  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:52.302405  662586 cri.go:89] found id: ""
	I1209 11:55:52.302436  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.302446  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:52.302453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:52.302522  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:52.338641  662586 cri.go:89] found id: ""
	I1209 11:55:52.338676  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.338688  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:52.338698  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:52.338754  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:52.375541  662586 cri.go:89] found id: ""
	I1209 11:55:52.375578  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.375591  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:52.375604  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:52.375624  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:52.389140  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:52.389190  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:52.460520  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:52.460546  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:52.460562  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:52.535234  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:52.535280  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:52.573317  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:52.573354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:50.896292  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:52.896875  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:51.453540  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:53.456640  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:55.950197  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:51.590899  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:53.591317  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:56.092219  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:55.124068  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:55.136800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:55.136868  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:55.169724  662586 cri.go:89] found id: ""
	I1209 11:55:55.169757  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.169769  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:55.169777  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:55.169843  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:55.207466  662586 cri.go:89] found id: ""
	I1209 11:55:55.207514  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.207528  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:55.207537  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:55.207600  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:55.241761  662586 cri.go:89] found id: ""
	I1209 11:55:55.241790  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.241801  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:55.241809  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:55.241874  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:55.274393  662586 cri.go:89] found id: ""
	I1209 11:55:55.274434  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.274447  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:55.274455  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:55.274522  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:55.307942  662586 cri.go:89] found id: ""
	I1209 11:55:55.307988  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.308002  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:55.308012  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:55.308088  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:55.340074  662586 cri.go:89] found id: ""
	I1209 11:55:55.340107  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.340116  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:55.340122  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:55.340196  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:55.388077  662586 cri.go:89] found id: ""
	I1209 11:55:55.388119  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.388140  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:55.388149  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:55.388230  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:55.422923  662586 cri.go:89] found id: ""
	I1209 11:55:55.422961  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.422975  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:55.422990  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:55.423008  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:55.476178  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:55.476219  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:55.489891  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:55.489919  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:55.555705  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:55.555726  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:55.555745  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:55.634818  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:55.634862  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:55.396320  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:57.895122  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:57.951119  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:00.451659  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:58.092427  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:00.590304  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:58.173169  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:58.188529  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:58.188620  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:58.225602  662586 cri.go:89] found id: ""
	I1209 11:55:58.225630  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.225641  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:58.225649  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:58.225709  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:58.259597  662586 cri.go:89] found id: ""
	I1209 11:55:58.259638  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.259652  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:58.259662  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:58.259744  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:58.293287  662586 cri.go:89] found id: ""
	I1209 11:55:58.293320  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.293329  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:58.293336  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:58.293390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:58.326581  662586 cri.go:89] found id: ""
	I1209 11:55:58.326611  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.326622  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:58.326630  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:58.326699  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:58.359636  662586 cri.go:89] found id: ""
	I1209 11:55:58.359665  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.359675  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:58.359681  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:58.359736  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:58.396767  662586 cri.go:89] found id: ""
	I1209 11:55:58.396798  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.396809  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:58.396818  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:58.396887  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:58.428907  662586 cri.go:89] found id: ""
	I1209 11:55:58.428941  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.428954  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:58.428962  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:58.429032  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:58.466082  662586 cri.go:89] found id: ""
	I1209 11:55:58.466124  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.466136  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:58.466149  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:58.466186  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:58.542333  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:58.542378  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:58.582397  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:58.582436  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:58.632980  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:58.633030  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:58.648464  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:58.648514  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:58.711714  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:01.212475  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:01.225574  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:01.225642  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:01.259666  662586 cri.go:89] found id: ""
	I1209 11:56:01.259704  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.259718  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:01.259726  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:01.259800  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:01.295433  662586 cri.go:89] found id: ""
	I1209 11:56:01.295474  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.295495  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:01.295503  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:01.295561  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:01.330316  662586 cri.go:89] found id: ""
	I1209 11:56:01.330352  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.330364  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:01.330373  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:01.330447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:01.366762  662586 cri.go:89] found id: ""
	I1209 11:56:01.366797  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.366808  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:01.366814  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:01.366878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:01.403511  662586 cri.go:89] found id: ""
	I1209 11:56:01.403539  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.403547  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:01.403553  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:01.403604  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:01.436488  662586 cri.go:89] found id: ""
	I1209 11:56:01.436526  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.436538  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:01.436546  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:01.436617  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:01.471647  662586 cri.go:89] found id: ""
	I1209 11:56:01.471676  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.471685  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:01.471690  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:01.471744  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:01.504065  662586 cri.go:89] found id: ""
	I1209 11:56:01.504099  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.504111  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:01.504124  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:01.504143  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:01.553434  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:01.553482  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:01.567537  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:01.567579  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:01.636968  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:01.636995  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:01.637012  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:01.713008  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:01.713049  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:59.896841  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.396972  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.451893  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.453118  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.591218  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.592199  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.253143  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:04.266428  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:04.266512  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:04.298769  662586 cri.go:89] found id: ""
	I1209 11:56:04.298810  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.298823  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:04.298833  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:04.298913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:04.330392  662586 cri.go:89] found id: ""
	I1209 11:56:04.330428  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.330441  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:04.330449  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:04.330528  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:04.362409  662586 cri.go:89] found id: ""
	I1209 11:56:04.362443  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.362455  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:04.362463  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:04.362544  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:04.396853  662586 cri.go:89] found id: ""
	I1209 11:56:04.396884  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.396893  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:04.396899  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:04.396966  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:04.430425  662586 cri.go:89] found id: ""
	I1209 11:56:04.430461  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.430470  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:04.430477  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:04.430531  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:04.465354  662586 cri.go:89] found id: ""
	I1209 11:56:04.465391  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.465403  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:04.465411  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:04.465480  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:04.500114  662586 cri.go:89] found id: ""
	I1209 11:56:04.500156  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.500167  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:04.500179  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:04.500259  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:04.534853  662586 cri.go:89] found id: ""
	I1209 11:56:04.534888  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.534902  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:04.534914  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:04.534928  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:04.586419  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:04.586457  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:04.600690  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:04.600728  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:04.669645  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:04.669685  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:04.669703  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:04.747973  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:04.748026  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:07.288721  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:07.302905  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:07.302975  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:07.336686  662586 cri.go:89] found id: ""
	I1209 11:56:07.336720  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.336728  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:07.336735  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:07.336798  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:07.370119  662586 cri.go:89] found id: ""
	I1209 11:56:07.370150  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.370159  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:07.370165  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:07.370245  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:07.402818  662586 cri.go:89] found id: ""
	I1209 11:56:07.402845  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.402853  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:07.402861  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:07.402923  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:07.437694  662586 cri.go:89] found id: ""
	I1209 11:56:07.437722  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.437732  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:07.437741  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:07.437806  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:07.474576  662586 cri.go:89] found id: ""
	I1209 11:56:07.474611  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.474622  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:07.474629  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:07.474705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:07.508538  662586 cri.go:89] found id: ""
	I1209 11:56:07.508575  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.508585  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:07.508592  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:07.508661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:07.548863  662586 cri.go:89] found id: ""
	I1209 11:56:07.548897  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.548911  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:07.548922  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:07.549093  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:07.592515  662586 cri.go:89] found id: ""
	I1209 11:56:07.592543  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.592555  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:07.592564  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:07.592579  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:07.652176  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:07.652219  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:04.895898  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.395712  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.398273  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:06.950668  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.450539  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.091573  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.591049  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.703040  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:07.703094  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:07.717880  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:07.717924  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:07.783396  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:07.783425  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:07.783441  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:10.362395  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:10.377478  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:10.377574  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:10.411923  662586 cri.go:89] found id: ""
	I1209 11:56:10.411956  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.411969  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:10.411978  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:10.412049  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:10.444601  662586 cri.go:89] found id: ""
	I1209 11:56:10.444633  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.444642  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:10.444648  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:10.444705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:10.486720  662586 cri.go:89] found id: ""
	I1209 11:56:10.486753  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.486763  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:10.486769  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:10.486822  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:10.523535  662586 cri.go:89] found id: ""
	I1209 11:56:10.523572  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.523581  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:10.523587  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:10.523641  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:10.557701  662586 cri.go:89] found id: ""
	I1209 11:56:10.557741  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.557754  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:10.557762  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:10.557834  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:10.593914  662586 cri.go:89] found id: ""
	I1209 11:56:10.593949  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.593959  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:10.593965  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:10.594017  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:10.626367  662586 cri.go:89] found id: ""
	I1209 11:56:10.626469  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.626482  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:10.626489  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:10.626547  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:10.665415  662586 cri.go:89] found id: ""
	I1209 11:56:10.665446  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.665456  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:10.665467  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:10.665480  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:10.747483  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:10.747532  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:10.787728  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:10.787758  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:10.840678  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:10.840722  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:10.855774  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:10.855809  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:10.929638  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:11.896254  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:14.395661  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:11.451031  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.452502  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:15.951720  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:11.592197  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.593711  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:16.091641  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.430793  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:13.446156  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:13.446261  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:13.491624  662586 cri.go:89] found id: ""
	I1209 11:56:13.491662  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.491675  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:13.491684  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:13.491758  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:13.537619  662586 cri.go:89] found id: ""
	I1209 11:56:13.537653  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.537666  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:13.537675  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:13.537750  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:13.585761  662586 cri.go:89] found id: ""
	I1209 11:56:13.585796  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.585810  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:13.585819  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:13.585883  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:13.620740  662586 cri.go:89] found id: ""
	I1209 11:56:13.620774  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.620785  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:13.620791  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:13.620858  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:13.654405  662586 cri.go:89] found id: ""
	I1209 11:56:13.654433  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.654442  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:13.654448  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:13.654509  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:13.687520  662586 cri.go:89] found id: ""
	I1209 11:56:13.687547  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.687558  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:13.687566  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:13.687642  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:13.721105  662586 cri.go:89] found id: ""
	I1209 11:56:13.721140  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.721153  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:13.721162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:13.721238  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:13.753900  662586 cri.go:89] found id: ""
	I1209 11:56:13.753933  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.753945  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:13.753960  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:13.753978  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:13.805864  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:13.805909  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:13.819356  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:13.819393  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:13.896097  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:13.896128  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:13.896150  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:13.979041  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:13.979084  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:16.516777  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:16.529916  662586 kubeadm.go:597] duration metric: took 4m1.869807937s to restartPrimaryControlPlane
	W1209 11:56:16.530015  662586 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:56:16.530067  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:56:16.396353  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.896097  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.451294  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:20.452525  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.092780  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:20.593275  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.635832  662586 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.105742271s)
	I1209 11:56:18.635914  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:56:18.651678  662586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:56:18.661965  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:56:18.672060  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:56:18.672082  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:56:18.672147  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:56:18.681627  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:56:18.681697  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:56:18.691514  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:56:18.701210  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:56:18.701292  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:56:18.710934  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:56:18.720506  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:56:18.720583  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:56:18.729996  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:56:18.739425  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:56:18.739486  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:56:18.748788  662586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:56:18.981849  662586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
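	At this point the restart path has spent just over four minutes (4m1.8s per the log) without finding a single control-plane container, so it gives up ("Unable to restart control-plane node(s), will reset cluster"), wipes the node with kubeadm reset, removes any /etc/kubernetes/*.conf that no longer mentions the expected control-plane endpoint, and re-runs kubeadm init with preflight checks skipped. A condensed sketch of that fallback; the endpoint, CRI socket and config path are taken from the log, while the helper and the blanket --ignore-preflight-errors=all are simplifications, not minikube's exact invocation:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	func run(name string, args ...string) error {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		// 1. Wipe the failed control plane (mirrors the `kubeadm reset` above).
		_ = run("sudo", "kubeadm", "reset", "--cri-socket", "/var/run/crio/crio.sock", "--force")

		// 2. Drop kubeconfigs that do not reference the expected endpoint, as the
		//    grep/rm pairs above do; unreadable or missing files fall through to removal.
		for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			path := "/etc/kubernetes/" + f
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				_ = os.Remove(path)
				fmt.Println("removed stale", path)
			}
		}

		// 3. Re-initialise from the generated kubeadm.yaml, skipping preflight checks.
		_ = run("sudo", "kubeadm", "init",
			"--config", "/var/tmp/minikube/kubeadm.yaml",
			"--ignore-preflight-errors=all")
	}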
	I1209 11:56:21.396764  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:23.894781  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:22.950912  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:24.951678  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:23.091306  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:24.592439  662109 pod_ready.go:82] duration metric: took 4m0.007699806s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	E1209 11:56:24.592477  662109 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1209 11:56:24.592486  662109 pod_ready.go:39] duration metric: took 4m7.416528348s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
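	The "context deadline exceeded" above is the extra wait giving up: metrics-server-6867b74b74-pwcsr never reported Ready inside its roughly four-minute budget, so the run moves on to waiting for the apiserver process instead. A minimal version of that Ready poll, shelling out to kubectl; the pod and namespace come from the log, while the context name and the helper itself are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitPodReady polls the pod's Ready condition until it is "True" or the
	// deadline expires, mirroring the pod_ready loop in the log above.
	func waitPodReady(kubecontext, namespace, pod string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		jsonpath := `jsonpath={.status.conditions[?(@.type=='Ready')].status}`
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubecontext, "-n", namespace,
				"get", "pod", pod, "-o", jsonpath).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				return nil
			}
			time.Sleep(2 * time.Second) // the log shows probes a couple of seconds apart
		}
		return fmt.Errorf("pod %s/%s not Ready within %s: context deadline exceeded",
			namespace, pod, timeout)
	}

	func main() {
		if err := waitPodReady("no-preload-820741", "kube-system",
			"metrics-server-6867b74b74-pwcsr", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}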
	I1209 11:56:24.592504  662109 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:56:24.592537  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:24.592590  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:24.643050  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:24.643085  662109 cri.go:89] found id: ""
	I1209 11:56:24.643094  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:24.643151  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.647529  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:24.647590  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:24.683125  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:24.683150  662109 cri.go:89] found id: ""
	I1209 11:56:24.683159  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:24.683222  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.687584  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:24.687706  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:24.720663  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:24.720699  662109 cri.go:89] found id: ""
	I1209 11:56:24.720708  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:24.720769  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.724881  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:24.724942  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:24.766055  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:24.766081  662109 cri.go:89] found id: ""
	I1209 11:56:24.766091  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:24.766152  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.770491  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:24.770557  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:24.804523  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:24.804549  662109 cri.go:89] found id: ""
	I1209 11:56:24.804558  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:24.804607  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.808452  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:24.808528  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:24.846043  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:24.846072  662109 cri.go:89] found id: ""
	I1209 11:56:24.846084  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:24.846140  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.849991  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:24.850057  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:24.884853  662109 cri.go:89] found id: ""
	I1209 11:56:24.884889  662109 logs.go:282] 0 containers: []
	W1209 11:56:24.884902  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:24.884912  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:24.884983  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:24.920103  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:24.920131  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:24.920135  662109 cri.go:89] found id: ""
	I1209 11:56:24.920152  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:24.920223  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.924212  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.928416  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:24.928436  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:25.077407  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:25.077468  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:25.125600  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:25.125649  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:25.163222  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:25.163268  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:25.208430  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:25.208465  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:25.245884  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:25.245917  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:25.318723  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:25.318775  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:25.333173  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:25.333207  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:25.394636  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:25.394683  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:25.435210  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:25.435248  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:25.482142  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:25.482184  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:25.516975  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:25.517006  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:25.565526  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:25.565565  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
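	This second trace (process 662109) is the healthier case: every control-plane component resolves to a container ID, so the same gathering phase also pulls the last 400 lines of each container with crictl in addition to the kubelet and CRI-O journals. A sketch of that gathering step, again illustrative rather than minikube's actual helper; the container ID used in main is the kube-apiserver ID found above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gatherLogs mirrors the "Gathering logs for ..." phase above: journal units
	// are read with journalctl -n 400, each discovered container with
	// crictl logs --tail 400 <id>.
	func gatherLogs(containers map[string]string) map[string]string {
		logs := make(map[string]string)
		for _, unit := range []string{"kubelet", "crio"} {
			out, _ := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
			logs[unit] = string(out)
		}
		for name, id := range containers {
			out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			logs[name] = string(out)
		}
		return logs
	}

	func main() {
		logs := gatherLogs(map[string]string{
			"kube-apiserver": "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb",
		})
		for name, text := range logs {
			fmt.Printf("=== %s: %d bytes of logs\n", name, len(text))
		}
	}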
	I1209 11:56:25.896281  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:28.395529  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:27.454449  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:29.950704  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:28.549071  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:28.567288  662109 api_server.go:72] duration metric: took 4m18.770451099s to wait for apiserver process to appear ...
	I1209 11:56:28.567319  662109 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:56:28.567367  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:28.567418  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:28.603341  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:28.603365  662109 cri.go:89] found id: ""
	I1209 11:56:28.603372  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:28.603423  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.607416  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:28.607493  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:28.647437  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:28.647465  662109 cri.go:89] found id: ""
	I1209 11:56:28.647477  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:28.647539  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.651523  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:28.651584  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:28.687889  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:28.687920  662109 cri.go:89] found id: ""
	I1209 11:56:28.687929  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:28.687983  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.692025  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:28.692100  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:28.728934  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:28.728961  662109 cri.go:89] found id: ""
	I1209 11:56:28.728969  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:28.729020  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.733217  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:28.733300  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:28.768700  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:28.768726  662109 cri.go:89] found id: ""
	I1209 11:56:28.768735  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:28.768790  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.772844  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:28.772921  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:28.812073  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:28.812104  662109 cri.go:89] found id: ""
	I1209 11:56:28.812116  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:28.812195  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.816542  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:28.816612  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:28.850959  662109 cri.go:89] found id: ""
	I1209 11:56:28.850997  662109 logs.go:282] 0 containers: []
	W1209 11:56:28.851010  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:28.851018  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:28.851075  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:28.894115  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:28.894142  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:28.894148  662109 cri.go:89] found id: ""
	I1209 11:56:28.894157  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:28.894228  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.899260  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.903033  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:28.903055  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:28.916411  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:28.916447  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:28.965873  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:28.965911  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:29.003553  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:29.003591  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:29.038945  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:29.038989  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:29.079595  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:29.079636  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:29.117632  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:29.117665  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:29.556193  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:29.556245  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:29.629530  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:29.629571  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:29.746102  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:29.746137  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:29.799342  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:29.799379  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:29.851197  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:29.851254  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:29.884688  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:29.884725  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:30.396025  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:32.396195  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:34.396605  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:31.951405  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:34.451838  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:32.425773  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:56:32.432276  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I1209 11:56:32.433602  662109 api_server.go:141] control plane version: v1.31.2
	I1209 11:56:32.433634  662109 api_server.go:131] duration metric: took 3.866306159s to wait for apiserver health ...
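	With the apiserver process back, health is confirmed by a plain GET against /healthz on 192.168.39.169:8443, which returns 200/ok above. An equivalent stand-alone probe is sketched below; skipping TLS verification is a shortcut for the sketch, whereas a real client would verify against the cluster CA from the kubeconfig:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a cert signed by the cluster CA; for a quick
			// sketch we skip verification instead of loading that CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.169:8443/healthz")
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
	}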
	I1209 11:56:32.433647  662109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:56:32.433680  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:32.433744  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:32.471560  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:32.471593  662109 cri.go:89] found id: ""
	I1209 11:56:32.471604  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:32.471684  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.475735  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:32.475809  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:32.509788  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:32.509821  662109 cri.go:89] found id: ""
	I1209 11:56:32.509833  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:32.509889  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.513849  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:32.513908  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:32.547022  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:32.547046  662109 cri.go:89] found id: ""
	I1209 11:56:32.547055  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:32.547113  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.551393  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:32.551476  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:32.586478  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:32.586516  662109 cri.go:89] found id: ""
	I1209 11:56:32.586536  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:32.586605  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.592876  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:32.592950  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:32.626775  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:32.626803  662109 cri.go:89] found id: ""
	I1209 11:56:32.626812  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:32.626869  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.630757  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:32.630825  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:32.663980  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:32.664013  662109 cri.go:89] found id: ""
	I1209 11:56:32.664026  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:32.664093  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.668368  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:32.668449  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:32.704638  662109 cri.go:89] found id: ""
	I1209 11:56:32.704675  662109 logs.go:282] 0 containers: []
	W1209 11:56:32.704688  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:32.704695  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:32.704752  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:32.743694  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:32.743729  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:32.743735  662109 cri.go:89] found id: ""
	I1209 11:56:32.743746  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:32.743814  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.749146  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.753226  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:32.753253  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:32.787832  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:32.787877  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:32.824859  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:32.824891  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:32.881776  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:32.881808  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:32.919018  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:32.919064  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:32.956839  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:32.956869  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:33.334255  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:33.334300  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:33.406008  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:33.406049  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:33.453689  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:33.453724  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:33.496168  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:33.496209  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:33.532057  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:33.532090  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:33.575050  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:33.575087  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:33.588543  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:33.588575  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:36.194483  662109 system_pods.go:59] 8 kube-system pods found
	I1209 11:56:36.194516  662109 system_pods.go:61] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running
	I1209 11:56:36.194522  662109 system_pods.go:61] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running
	I1209 11:56:36.194527  662109 system_pods.go:61] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running
	I1209 11:56:36.194531  662109 system_pods.go:61] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running
	I1209 11:56:36.194534  662109 system_pods.go:61] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:56:36.194538  662109 system_pods.go:61] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running
	I1209 11:56:36.194543  662109 system_pods.go:61] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:56:36.194549  662109 system_pods.go:61] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running
	I1209 11:56:36.194559  662109 system_pods.go:74] duration metric: took 3.76090495s to wait for pod list to return data ...
	I1209 11:56:36.194567  662109 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:56:36.197070  662109 default_sa.go:45] found service account: "default"
	I1209 11:56:36.197094  662109 default_sa.go:55] duration metric: took 2.520926ms for default service account to be created ...
	I1209 11:56:36.197104  662109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:56:36.201494  662109 system_pods.go:86] 8 kube-system pods found
	I1209 11:56:36.201518  662109 system_pods.go:89] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running
	I1209 11:56:36.201524  662109 system_pods.go:89] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running
	I1209 11:56:36.201528  662109 system_pods.go:89] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running
	I1209 11:56:36.201533  662109 system_pods.go:89] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running
	I1209 11:56:36.201537  662109 system_pods.go:89] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:56:36.201540  662109 system_pods.go:89] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running
	I1209 11:56:36.201547  662109 system_pods.go:89] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:56:36.201551  662109 system_pods.go:89] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running
	I1209 11:56:36.201558  662109 system_pods.go:126] duration metric: took 4.448871ms to wait for k8s-apps to be running ...
	I1209 11:56:36.201567  662109 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:56:36.201628  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:56:36.217457  662109 system_svc.go:56] duration metric: took 15.878252ms WaitForService to wait for kubelet
	I1209 11:56:36.217503  662109 kubeadm.go:582] duration metric: took 4m26.420670146s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:56:36.217527  662109 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:56:36.220498  662109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:56:36.220526  662109 node_conditions.go:123] node cpu capacity is 2
	I1209 11:56:36.220572  662109 node_conditions.go:105] duration metric: took 3.039367ms to run NodePressure ...
	I1209 11:56:36.220586  662109 start.go:241] waiting for startup goroutines ...
	I1209 11:56:36.220597  662109 start.go:246] waiting for cluster config update ...
	I1209 11:56:36.220628  662109 start.go:255] writing updated cluster config ...
	I1209 11:56:36.220974  662109 ssh_runner.go:195] Run: rm -f paused
	I1209 11:56:36.272920  662109 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:56:36.274686  662109 out.go:177] * Done! kubectl is now configured to use "no-preload-820741" cluster and "default" namespace by default
	I1209 11:56:36.895681  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:38.896066  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:36.951281  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:39.455225  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:41.395880  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:43.895464  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:41.951287  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:44.451357  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:45.896184  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:48.398617  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:46.451733  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:48.950857  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:50.950964  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:50.895678  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:52.896291  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:53.389365  663024 pod_ready.go:82] duration metric: took 4m0.00015362s for pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace to be "Ready" ...
	E1209 11:56:53.389414  663024 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1209 11:56:53.389440  663024 pod_ready.go:39] duration metric: took 4m13.044002506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:56:53.389480  663024 kubeadm.go:597] duration metric: took 4m21.286289463s to restartPrimaryControlPlane
	W1209 11:56:53.389572  663024 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:56:53.389610  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:56:52.951153  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:55.451223  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:57.950413  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:00.449904  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:02.450069  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:04.451074  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:06.950873  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:08.951176  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:11.450596  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:13.451552  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:13.944884  661546 pod_ready.go:82] duration metric: took 4m0.000348644s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" ...
	E1209 11:57:13.944919  661546 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I1209 11:57:13.944943  661546 pod_ready.go:39] duration metric: took 4m14.049505666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:13.944980  661546 kubeadm.go:597] duration metric: took 4m22.094543781s to restartPrimaryControlPlane
	W1209 11:57:13.945086  661546 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:57:13.945123  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:57:19.569119  663024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.179481312s)
	I1209 11:57:19.569196  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:19.583584  663024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:57:19.592807  663024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:57:19.602121  663024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:57:19.602190  663024 kubeadm.go:157] found existing configuration files:
	
	I1209 11:57:19.602249  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 11:57:19.611109  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:57:19.611187  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:57:19.620264  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 11:57:19.629026  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:57:19.629103  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:57:19.638036  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 11:57:19.646265  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:57:19.646331  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:57:19.655187  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 11:57:19.663908  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:57:19.663962  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:57:19.673002  663024 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:57:19.717664  663024 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 11:57:19.717737  663024 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:57:19.818945  663024 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:57:19.819065  663024 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:57:19.819160  663024 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 11:57:19.828186  663024 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:57:19.829831  663024 out.go:235]   - Generating certificates and keys ...
	I1209 11:57:19.829938  663024 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:57:19.830031  663024 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:57:19.830145  663024 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:57:19.830252  663024 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:57:19.830377  663024 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:57:19.830470  663024 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:57:19.830568  663024 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:57:19.830644  663024 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:57:19.830745  663024 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:57:19.830825  663024 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:57:19.830878  663024 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:57:19.830963  663024 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:57:19.961813  663024 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:57:20.436964  663024 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 11:57:20.652041  663024 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:57:20.837664  663024 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:57:20.892035  663024 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:57:20.892497  663024 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:57:20.895295  663024 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:57:20.896871  663024 out.go:235]   - Booting up control plane ...
	I1209 11:57:20.896992  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:57:20.897139  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:57:20.897260  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:57:20.914735  663024 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:57:20.920520  663024 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:57:20.920566  663024 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:57:21.047290  663024 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 11:57:21.047437  663024 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 11:57:22.049131  663024 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001914766s
	I1209 11:57:22.049257  663024 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 11:57:27.053443  663024 kubeadm.go:310] [api-check] The API server is healthy after 5.002570817s
	I1209 11:57:27.068518  663024 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 11:57:27.086371  663024 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 11:57:27.114617  663024 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 11:57:27.114833  663024 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-482476 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 11:57:27.131354  663024 kubeadm.go:310] [bootstrap-token] Using token: 6aanjy.0y855mmcca5ic9co
	I1209 11:57:27.132852  663024 out.go:235]   - Configuring RBAC rules ...
	I1209 11:57:27.132992  663024 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 11:57:27.139770  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 11:57:27.147974  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 11:57:27.155508  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 11:57:27.159181  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 11:57:27.163403  663024 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 11:57:27.458812  663024 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 11:57:27.900322  663024 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 11:57:28.458864  663024 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 11:57:28.459944  663024 kubeadm.go:310] 
	I1209 11:57:28.460043  663024 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 11:57:28.460054  663024 kubeadm.go:310] 
	I1209 11:57:28.460156  663024 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 11:57:28.460166  663024 kubeadm.go:310] 
	I1209 11:57:28.460198  663024 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 11:57:28.460284  663024 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 11:57:28.460385  663024 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 11:57:28.460414  663024 kubeadm.go:310] 
	I1209 11:57:28.460499  663024 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 11:57:28.460509  663024 kubeadm.go:310] 
	I1209 11:57:28.460576  663024 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 11:57:28.460586  663024 kubeadm.go:310] 
	I1209 11:57:28.460663  663024 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 11:57:28.460766  663024 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 11:57:28.460862  663024 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 11:57:28.460871  663024 kubeadm.go:310] 
	I1209 11:57:28.460992  663024 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 11:57:28.461096  663024 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 11:57:28.461121  663024 kubeadm.go:310] 
	I1209 11:57:28.461244  663024 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 6aanjy.0y855mmcca5ic9co \
	I1209 11:57:28.461395  663024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 11:57:28.461435  663024 kubeadm.go:310] 	--control-plane 
	I1209 11:57:28.461446  663024 kubeadm.go:310] 
	I1209 11:57:28.461551  663024 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 11:57:28.461574  663024 kubeadm.go:310] 
	I1209 11:57:28.461679  663024 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 6aanjy.0y855mmcca5ic9co \
	I1209 11:57:28.461832  663024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 11:57:28.462544  663024 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:57:28.462594  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:57:28.462620  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:57:28.464574  663024 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:57:28.465952  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:57:28.476155  663024 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:57:28.493471  663024 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:57:28.493551  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:28.493594  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-482476 minikube.k8s.io/updated_at=2024_12_09T11_57_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=default-k8s-diff-port-482476 minikube.k8s.io/primary=true
	I1209 11:57:28.506467  663024 ops.go:34] apiserver oom_adj: -16
	I1209 11:57:28.724224  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:29.224971  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:29.724660  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:30.224466  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:30.724354  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:31.224702  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:31.725101  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.224364  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.724357  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.844191  663024 kubeadm.go:1113] duration metric: took 4.350713188s to wait for elevateKubeSystemPrivileges
	I1209 11:57:32.844243  663024 kubeadm.go:394] duration metric: took 5m0.79272843s to StartCluster
	I1209 11:57:32.844287  663024 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:32.844417  663024 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:57:32.846697  663024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:32.847014  663024 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:57:32.847067  663024 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:57:32.847162  663024 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847186  663024 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847192  663024 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.847201  663024 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:57:32.847204  663024 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-482476"
	I1209 11:57:32.847228  663024 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847272  663024 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.847287  663024 addons.go:243] addon metrics-server should already be in state true
	I1209 11:57:32.847285  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:57:32.847328  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.847237  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.847705  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847713  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847750  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.847755  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.847841  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847873  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.848599  663024 out.go:177] * Verifying Kubernetes components...
	I1209 11:57:32.850246  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:57:32.864945  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44639
	I1209 11:57:32.865141  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37211
	I1209 11:57:32.865203  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42835
	I1209 11:57:32.865473  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.865635  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.865733  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.866096  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866115  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866241  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866264  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866241  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866316  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866642  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866654  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866656  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866865  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.867243  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.867287  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.867321  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.867358  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.871085  663024 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.871109  663024 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:57:32.871142  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.871395  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.871431  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.883301  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45049
	I1209 11:57:32.883976  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.884508  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36509
	I1209 11:57:32.884758  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.884775  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.885123  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.885279  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.885610  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.885801  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.885817  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.886142  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.886347  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.888357  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.888762  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
	I1209 11:57:32.889103  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.889192  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.889669  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.889692  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.890035  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.890082  663024 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:57:32.890647  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.890687  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.890867  663024 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:57:32.891756  663024 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:32.891774  663024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:57:32.891794  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.892543  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:57:32.892563  663024 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:57:32.892587  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.896754  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897437  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.897471  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897752  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.897836  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897975  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.898370  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.898381  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.898395  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.898556  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:32.898649  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.898829  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.898975  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.899101  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:32.907891  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34063
	I1209 11:57:32.908317  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.908827  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.908848  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.909352  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.909551  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.911172  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.911417  663024 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:32.911434  663024 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:57:32.911460  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.914016  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.914474  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.914490  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.914646  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.914838  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.914965  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.915071  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:33.067075  663024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:57:33.085671  663024 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-482476" to be "Ready" ...
	I1209 11:57:33.095765  663024 node_ready.go:49] node "default-k8s-diff-port-482476" has status "Ready":"True"
	I1209 11:57:33.095801  663024 node_ready.go:38] duration metric: took 10.096442ms for node "default-k8s-diff-port-482476" to be "Ready" ...
	I1209 11:57:33.095815  663024 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:33.105497  663024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:33.200059  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:33.218467  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:57:33.218496  663024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:57:33.225990  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:33.278736  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:57:33.278772  663024 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:57:33.342270  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:33.342304  663024 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:57:33.412771  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:34.250639  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.050535014s)
	I1209 11:57:34.250706  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.250720  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.250704  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.024681453s)
	I1209 11:57:34.250811  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.250820  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.251151  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.251170  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.251182  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.251192  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.251197  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.251238  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.251245  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.251253  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.251261  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.253136  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.253141  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.253180  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.253182  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.253194  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.253214  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.279650  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.279682  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.280064  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.280116  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.280130  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.656217  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.243394493s)
	I1209 11:57:34.656287  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.656305  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.656641  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.656655  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.656671  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.656683  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.656691  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.656982  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.656999  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.657011  663024 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-482476"
	I1209 11:57:34.658878  663024 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1209 11:57:34.660089  663024 addons.go:510] duration metric: took 1.813029421s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1209 11:57:35.122487  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:36.112072  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.112097  663024 pod_ready.go:82] duration metric: took 3.006564547s for pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.112110  663024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.117521  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.117545  663024 pod_ready.go:82] duration metric: took 5.428168ms for pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.117554  663024 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.122929  663024 pod_ready.go:93] pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.122953  663024 pod_ready.go:82] duration metric: took 5.392834ms for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.122972  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.127025  663024 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.127047  663024 pod_ready.go:82] duration metric: took 4.068175ms for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.127056  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.131036  663024 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.131055  663024 pod_ready.go:82] duration metric: took 3.993825ms for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.131064  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pgs52" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.508951  663024 pod_ready.go:93] pod "kube-proxy-pgs52" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.508980  663024 pod_ready.go:82] duration metric: took 377.910722ms for pod "kube-proxy-pgs52" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.508991  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.909065  663024 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.909093  663024 pod_ready.go:82] duration metric: took 400.095775ms for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.909100  663024 pod_ready.go:39] duration metric: took 3.813270613s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:36.909116  663024 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:57:36.909169  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:57:36.924688  663024 api_server.go:72] duration metric: took 4.077626254s to wait for apiserver process to appear ...
	I1209 11:57:36.924726  663024 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:57:36.924752  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:57:36.930782  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 200:
	ok
	I1209 11:57:36.931734  663024 api_server.go:141] control plane version: v1.31.2
	I1209 11:57:36.931758  663024 api_server.go:131] duration metric: took 7.024599ms to wait for apiserver health ...
	I1209 11:57:36.931766  663024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:57:37.112291  663024 system_pods.go:59] 9 kube-system pods found
	I1209 11:57:37.112323  663024 system_pods.go:61] "coredns-7c65d6cfc9-7rr27" [a5dd0401-80bf-4c87-9771-e1837c960425] Running
	I1209 11:57:37.112328  663024 system_pods.go:61] "coredns-7c65d6cfc9-bb47s" [2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4] Running
	I1209 11:57:37.112332  663024 system_pods.go:61] "etcd-default-k8s-diff-port-482476" [01dfcc5e-2ac5-4cee-8b68-ac1f3bf8866b] Running
	I1209 11:57:37.112337  663024 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-482476" [c65f1162-8d8a-47e8-8f1a-4abc0ebc8649] Running
	I1209 11:57:37.112340  663024 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-482476" [363e5f34-878a-434e-8bf2-21cdc06be305] Running
	I1209 11:57:37.112343  663024 system_pods.go:61] "kube-proxy-pgs52" [d5a3463e-e955-4345-9559-b23cce44fa0e] Running
	I1209 11:57:37.112346  663024 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-482476" [53f924fa-554b-4ceb-b171-efcfecdc137e] Running
	I1209 11:57:37.112356  663024 system_pods.go:61] "metrics-server-6867b74b74-2lmtn" [60803d31-d0b0-4d51-a9f2-cadafd184a90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:57:37.112363  663024 system_pods.go:61] "storage-provisioner" [6b53e3ba-9bc9-4b5a-bec9-d06336616c8a] Running
	I1209 11:57:37.112373  663024 system_pods.go:74] duration metric: took 180.599339ms to wait for pod list to return data ...
	I1209 11:57:37.112387  663024 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:57:37.309750  663024 default_sa.go:45] found service account: "default"
	I1209 11:57:37.309777  663024 default_sa.go:55] duration metric: took 197.382304ms for default service account to be created ...
	I1209 11:57:37.309787  663024 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:57:37.513080  663024 system_pods.go:86] 9 kube-system pods found
	I1209 11:57:37.513112  663024 system_pods.go:89] "coredns-7c65d6cfc9-7rr27" [a5dd0401-80bf-4c87-9771-e1837c960425] Running
	I1209 11:57:37.513118  663024 system_pods.go:89] "coredns-7c65d6cfc9-bb47s" [2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4] Running
	I1209 11:57:37.513121  663024 system_pods.go:89] "etcd-default-k8s-diff-port-482476" [01dfcc5e-2ac5-4cee-8b68-ac1f3bf8866b] Running
	I1209 11:57:37.513128  663024 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-482476" [c65f1162-8d8a-47e8-8f1a-4abc0ebc8649] Running
	I1209 11:57:37.513133  663024 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-482476" [363e5f34-878a-434e-8bf2-21cdc06be305] Running
	I1209 11:57:37.513136  663024 system_pods.go:89] "kube-proxy-pgs52" [d5a3463e-e955-4345-9559-b23cce44fa0e] Running
	I1209 11:57:37.513141  663024 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-482476" [53f924fa-554b-4ceb-b171-efcfecdc137e] Running
	I1209 11:57:37.513150  663024 system_pods.go:89] "metrics-server-6867b74b74-2lmtn" [60803d31-d0b0-4d51-a9f2-cadafd184a90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:57:37.513156  663024 system_pods.go:89] "storage-provisioner" [6b53e3ba-9bc9-4b5a-bec9-d06336616c8a] Running
	I1209 11:57:37.513168  663024 system_pods.go:126] duration metric: took 203.373238ms to wait for k8s-apps to be running ...
	I1209 11:57:37.513181  663024 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:57:37.513233  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:37.527419  663024 system_svc.go:56] duration metric: took 14.22618ms WaitForService to wait for kubelet
	I1209 11:57:37.527451  663024 kubeadm.go:582] duration metric: took 4.680397826s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:57:37.527473  663024 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:57:37.710396  663024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:57:37.710429  663024 node_conditions.go:123] node cpu capacity is 2
	I1209 11:57:37.710447  663024 node_conditions.go:105] duration metric: took 182.968526ms to run NodePressure ...
	I1209 11:57:37.710463  663024 start.go:241] waiting for startup goroutines ...
	I1209 11:57:37.710473  663024 start.go:246] waiting for cluster config update ...
	I1209 11:57:37.710487  663024 start.go:255] writing updated cluster config ...
	I1209 11:57:37.710799  663024 ssh_runner.go:195] Run: rm -f paused
	I1209 11:57:37.760468  663024 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:57:37.762472  663024 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-482476" cluster and "default" namespace by default
	I1209 11:57:40.219406  661546 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.274255602s)
	I1209 11:57:40.219478  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:40.234863  661546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:57:40.245357  661546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:57:40.255253  661546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:57:40.255276  661546 kubeadm.go:157] found existing configuration files:
	
	I1209 11:57:40.255319  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:57:40.264881  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:57:40.264934  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:57:40.274990  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:57:40.284941  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:57:40.284998  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:57:40.295188  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:57:40.305136  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:57:40.305181  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:57:40.315125  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:57:40.324727  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:57:40.324789  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:57:40.333574  661546 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:57:40.378743  661546 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 11:57:40.378932  661546 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:57:40.492367  661546 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:57:40.492493  661546 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:57:40.492658  661546 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 11:57:40.504994  661546 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:57:40.506760  661546 out.go:235]   - Generating certificates and keys ...
	I1209 11:57:40.506878  661546 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:57:40.506955  661546 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:57:40.507033  661546 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:57:40.507088  661546 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:57:40.507156  661546 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:57:40.507274  661546 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:57:40.507377  661546 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:57:40.507463  661546 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:57:40.507573  661546 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:57:40.507692  661546 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:57:40.507756  661546 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:57:40.507836  661546 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:57:40.607744  661546 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:57:40.684950  661546 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 11:57:40.826079  661546 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:57:40.945768  661546 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:57:41.212984  661546 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:57:41.213406  661546 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:57:41.216390  661546 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:57:41.218053  661546 out.go:235]   - Booting up control plane ...
	I1209 11:57:41.218202  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:57:41.218307  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:57:41.220009  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:57:41.237816  661546 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:57:41.244148  661546 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:57:41.244204  661546 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:57:41.371083  661546 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 11:57:41.371245  661546 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 11:57:41.872938  661546 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.998998ms
	I1209 11:57:41.873141  661546 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 11:57:46.874725  661546 kubeadm.go:310] [api-check] The API server is healthy after 5.001587898s
	I1209 11:57:46.886996  661546 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 11:57:46.897941  661546 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 11:57:46.927451  661546 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 11:57:46.927718  661546 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-005123 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 11:57:46.945578  661546 kubeadm.go:310] [bootstrap-token] Using token: bhdcn7.orsewwwtbk1gmdg8
	I1209 11:57:46.946894  661546 out.go:235]   - Configuring RBAC rules ...
	I1209 11:57:46.947041  661546 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 11:57:46.950006  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 11:57:46.956761  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 11:57:46.959756  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 11:57:46.962973  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 11:57:46.970016  661546 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 11:57:47.282251  661546 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 11:57:47.714588  661546 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 11:57:48.283610  661546 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 11:57:48.283671  661546 kubeadm.go:310] 
	I1209 11:57:48.283774  661546 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 11:57:48.283786  661546 kubeadm.go:310] 
	I1209 11:57:48.283901  661546 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 11:57:48.283948  661546 kubeadm.go:310] 
	I1209 11:57:48.283995  661546 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 11:57:48.284089  661546 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 11:57:48.284139  661546 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 11:57:48.284148  661546 kubeadm.go:310] 
	I1209 11:57:48.284216  661546 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 11:57:48.284224  661546 kubeadm.go:310] 
	I1209 11:57:48.284281  661546 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 11:57:48.284291  661546 kubeadm.go:310] 
	I1209 11:57:48.284359  661546 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 11:57:48.284465  661546 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 11:57:48.284583  661546 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 11:57:48.284596  661546 kubeadm.go:310] 
	I1209 11:57:48.284739  661546 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 11:57:48.284846  661546 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 11:57:48.284859  661546 kubeadm.go:310] 
	I1209 11:57:48.284972  661546 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bhdcn7.orsewwwtbk1gmdg8 \
	I1209 11:57:48.285133  661546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 11:57:48.285170  661546 kubeadm.go:310] 	--control-plane 
	I1209 11:57:48.285184  661546 kubeadm.go:310] 
	I1209 11:57:48.285312  661546 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 11:57:48.285321  661546 kubeadm.go:310] 
	I1209 11:57:48.285388  661546 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bhdcn7.orsewwwtbk1gmdg8 \
	I1209 11:57:48.285530  661546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 11:57:48.286117  661546 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:57:48.286246  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:57:48.286263  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:57:48.288141  661546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:57:48.289484  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:57:48.301160  661546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:57:48.320752  661546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:57:48.320831  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:48.320831  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-005123 minikube.k8s.io/updated_at=2024_12_09T11_57_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=embed-certs-005123 minikube.k8s.io/primary=true
	I1209 11:57:48.552069  661546 ops.go:34] apiserver oom_adj: -16
	I1209 11:57:48.552119  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:49.052304  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:49.552516  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:50.052548  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:50.552931  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:51.052381  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:51.552589  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.052273  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.552546  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.645059  661546 kubeadm.go:1113] duration metric: took 4.324296774s to wait for elevateKubeSystemPrivileges
	I1209 11:57:52.645107  661546 kubeadm.go:394] duration metric: took 5m0.847017281s to StartCluster
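The burst of "kubectl get sa default" runs at roughly half-second intervals above is minikube polling until the default service account is usable after the minikube-rbac clusterrolebinding is created. A minimal sketch of that polling pattern (not minikube's actual helper; it assumes kubectl is on PATH and already pointed at the right kubeconfig, whereas the log runs it with sudo and an explicit --kubeconfig):

    // poll_sa.go: re-run `kubectl get sa default` every 500ms until it succeeds,
    // roughly mirroring the elevateKubeSystemPrivileges wait loop in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            // Exit code 0 means the API server answered and the service account exists.
            if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
                fmt.Println("default service account is available")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }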
	I1209 11:57:52.645133  661546 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:52.645241  661546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:57:52.647822  661546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:52.648129  661546 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:57:52.648226  661546 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:57:52.648338  661546 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-005123"
	I1209 11:57:52.648354  661546 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-005123"
	W1209 11:57:52.648366  661546 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:57:52.648367  661546 addons.go:69] Setting default-storageclass=true in profile "embed-certs-005123"
	I1209 11:57:52.648396  661546 config.go:182] Loaded profile config "embed-certs-005123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:57:52.648397  661546 addons.go:69] Setting metrics-server=true in profile "embed-certs-005123"
	I1209 11:57:52.648434  661546 addons.go:234] Setting addon metrics-server=true in "embed-certs-005123"
	I1209 11:57:52.648399  661546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-005123"
	W1209 11:57:52.648448  661546 addons.go:243] addon metrics-server should already be in state true
	I1209 11:57:52.648499  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.648400  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.648867  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648883  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648914  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648932  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.648947  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.648917  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.649702  661546 out.go:177] * Verifying Kubernetes components...
	I1209 11:57:52.651094  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:57:52.665090  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38065
	I1209 11:57:52.665309  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35905
	I1209 11:57:52.665602  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.665889  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.666308  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.666329  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.666470  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.666492  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.666768  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.666907  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.667140  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I1209 11:57:52.667344  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.667387  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.667536  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.667580  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.667652  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.668127  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.668154  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.668657  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.668868  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.672550  661546 addons.go:234] Setting addon default-storageclass=true in "embed-certs-005123"
	W1209 11:57:52.672580  661546 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:57:52.672612  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.672985  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.673032  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.684848  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43667
	I1209 11:57:52.684854  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I1209 11:57:52.685398  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.685451  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.686054  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.686081  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.686155  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.686228  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.686553  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.686614  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.686753  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.686930  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.687838  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33245
	I1209 11:57:52.688391  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.688818  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.689013  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.689040  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.689314  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.689450  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.689908  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.689943  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.691136  661546 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:57:52.691137  661546 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:57:52.692714  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:57:52.692732  661546 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:57:52.692749  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.692789  661546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:52.692800  661546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:57:52.692813  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.696349  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.696791  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.696815  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697088  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697143  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.697314  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.697482  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.697512  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.697547  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697658  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.697787  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.697962  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.698093  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.698209  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.705766  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37265
	I1209 11:57:52.706265  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.706694  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.706721  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.707031  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.707241  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.708747  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.708980  661546 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:52.708997  661546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:57:52.709016  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.711546  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.711986  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.712011  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.712263  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.712438  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.712604  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.712751  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.858535  661546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:57:52.879035  661546 node_ready.go:35] waiting up to 6m0s for node "embed-certs-005123" to be "Ready" ...
	I1209 11:57:52.899550  661546 node_ready.go:49] node "embed-certs-005123" has status "Ready":"True"
	I1209 11:57:52.899575  661546 node_ready.go:38] duration metric: took 20.508179ms for node "embed-certs-005123" to be "Ready" ...
	I1209 11:57:52.899589  661546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:52.960716  661546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:52.962755  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:57:52.962779  661546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:57:52.995747  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:57:52.995787  661546 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:57:53.031395  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:53.031426  661546 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:57:53.031535  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:53.049695  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:53.061716  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:53.314158  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.314212  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.314523  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:53.314548  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.314565  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:53.314586  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.314598  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.314857  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.314875  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:53.323573  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.323590  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.323822  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:53.323873  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.323882  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.004616  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.004655  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.005050  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.005067  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.005075  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.005083  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.005351  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.005372  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.005370  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:54.352527  661546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.290758533s)
	I1209 11:57:54.352616  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.352636  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.352957  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.352977  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.352987  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.352995  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.353278  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.353320  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.353336  661546 addons.go:475] Verifying addon metrics-server=true in "embed-certs-005123"
	I1209 11:57:54.353387  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:54.355153  661546 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1209 11:57:54.356250  661546 addons.go:510] duration metric: took 1.708044398s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1209 11:57:54.968202  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:57.467948  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:57.467979  661546 pod_ready.go:82] duration metric: took 4.507228843s for pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:57.467992  661546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:59.475024  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace has status "Ready":"False"
	I1209 11:58:00.473961  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.473987  661546 pod_ready.go:82] duration metric: took 3.005987981s for pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.473996  661546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.478022  661546 pod_ready.go:93] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.478040  661546 pod_ready.go:82] duration metric: took 4.038353ms for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.478049  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.482415  661546 pod_ready.go:93] pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.482439  661546 pod_ready.go:82] duration metric: took 4.384854ms for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.482449  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.486284  661546 pod_ready.go:93] pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.486311  661546 pod_ready.go:82] duration metric: took 3.85467ms for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.486326  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n4pph" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.490260  661546 pod_ready.go:93] pod "kube-proxy-n4pph" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.490284  661546 pod_ready.go:82] duration metric: took 3.949342ms for pod "kube-proxy-n4pph" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.490296  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.872396  661546 pod_ready.go:93] pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.872420  661546 pod_ready.go:82] duration metric: took 382.116873ms for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.872428  661546 pod_ready.go:39] duration metric: took 7.97282742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
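The pod_ready waits above repeatedly fetch each system-critical pod until its Ready condition reports True. A minimal client-go sketch of the same idea (not minikube's code; the pod name is copied from the log purely as an example, and the default ~/.kube/config path is an assumption):

    // wait_ready.go: poll one kube-system pod until its Ready condition is True.
    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        home, _ := os.UserHomeDir()
        cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        name := "etcd-embed-certs-005123" // example pod name taken from the log above
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }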
	I1209 11:58:00.872446  661546 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:58:00.872502  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:58:00.887281  661546 api_server.go:72] duration metric: took 8.239108757s to wait for apiserver process to appear ...
	I1209 11:58:00.887312  661546 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:58:00.887333  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:58:00.892005  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 200:
	ok
	I1209 11:58:00.893247  661546 api_server.go:141] control plane version: v1.31.2
	I1209 11:58:00.893277  661546 api_server.go:131] duration metric: took 5.95753ms to wait for apiserver health ...
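The healthz probe above only expects HTTP 200 with body "ok" from the apiserver before continuing. A minimal sketch of such a probe (not minikube's implementation; the endpoint is copied from the log, and TLS verification is skipped here only to keep the example short, where a real client would trust the cluster CA):

    // healthz_probe.go: poll an apiserver /healthz endpoint until it returns 200 "ok".
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        url := "https://192.168.72.218:8443/healthz" // address taken from the log above
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver is healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for a healthy apiserver")
    }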
	I1209 11:58:00.893288  661546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:58:01.074723  661546 system_pods.go:59] 9 kube-system pods found
	I1209 11:58:01.074756  661546 system_pods.go:61] "coredns-7c65d6cfc9-t49mk" [ca3ba094-58a2-401d-8aea-46d6d96baacb] Running
	I1209 11:58:01.074762  661546 system_pods.go:61] "coredns-7c65d6cfc9-xspr9" [9384e9ea-987e-4728-bdf2-773645d52ab1] Running
	I1209 11:58:01.074766  661546 system_pods.go:61] "etcd-embed-certs-005123" [f8b23ab4-b852-4598-93d7-3b6eb1543a4b] Running
	I1209 11:58:01.074771  661546 system_pods.go:61] "kube-apiserver-embed-certs-005123" [96b0cb5a-1d81-48fc-bbc9-015de7c48ac5] Running
	I1209 11:58:01.074774  661546 system_pods.go:61] "kube-controller-manager-embed-certs-005123" [33c237d8-8f5a-4d8f-870a-7ba0dc2180dd] Running
	I1209 11:58:01.074777  661546 system_pods.go:61] "kube-proxy-n4pph" [520d101f-0df0-413f-a0fc-22ecc2884d40] Running
	I1209 11:58:01.074780  661546 system_pods.go:61] "kube-scheduler-embed-certs-005123" [11dcaf15-fc3b-4945-af9b-d08aa5528679] Running
	I1209 11:58:01.074786  661546 system_pods.go:61] "metrics-server-6867b74b74-zfw9r" [8438b820-4cc5-4d7b-8af5-9349fdd87ca8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:58:01.074791  661546 system_pods.go:61] "storage-provisioner" [91ceb801-7262-4d7e-9623-c8c1931fc34b] Running
	I1209 11:58:01.074797  661546 system_pods.go:74] duration metric: took 181.502993ms to wait for pod list to return data ...
	I1209 11:58:01.074804  661546 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:58:01.272664  661546 default_sa.go:45] found service account: "default"
	I1209 11:58:01.272697  661546 default_sa.go:55] duration metric: took 197.886347ms for default service account to be created ...
	I1209 11:58:01.272707  661546 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:58:01.475062  661546 system_pods.go:86] 9 kube-system pods found
	I1209 11:58:01.475096  661546 system_pods.go:89] "coredns-7c65d6cfc9-t49mk" [ca3ba094-58a2-401d-8aea-46d6d96baacb] Running
	I1209 11:58:01.475102  661546 system_pods.go:89] "coredns-7c65d6cfc9-xspr9" [9384e9ea-987e-4728-bdf2-773645d52ab1] Running
	I1209 11:58:01.475105  661546 system_pods.go:89] "etcd-embed-certs-005123" [f8b23ab4-b852-4598-93d7-3b6eb1543a4b] Running
	I1209 11:58:01.475109  661546 system_pods.go:89] "kube-apiserver-embed-certs-005123" [96b0cb5a-1d81-48fc-bbc9-015de7c48ac5] Running
	I1209 11:58:01.475114  661546 system_pods.go:89] "kube-controller-manager-embed-certs-005123" [33c237d8-8f5a-4d8f-870a-7ba0dc2180dd] Running
	I1209 11:58:01.475118  661546 system_pods.go:89] "kube-proxy-n4pph" [520d101f-0df0-413f-a0fc-22ecc2884d40] Running
	I1209 11:58:01.475121  661546 system_pods.go:89] "kube-scheduler-embed-certs-005123" [11dcaf15-fc3b-4945-af9b-d08aa5528679] Running
	I1209 11:58:01.475131  661546 system_pods.go:89] "metrics-server-6867b74b74-zfw9r" [8438b820-4cc5-4d7b-8af5-9349fdd87ca8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:58:01.475138  661546 system_pods.go:89] "storage-provisioner" [91ceb801-7262-4d7e-9623-c8c1931fc34b] Running
	I1209 11:58:01.475148  661546 system_pods.go:126] duration metric: took 202.434687ms to wait for k8s-apps to be running ...
	I1209 11:58:01.475158  661546 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:58:01.475220  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:58:01.490373  661546 system_svc.go:56] duration metric: took 15.20079ms WaitForService to wait for kubelet
	I1209 11:58:01.490416  661546 kubeadm.go:582] duration metric: took 8.842250416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:58:01.490451  661546 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:58:01.673621  661546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:58:01.673651  661546 node_conditions.go:123] node cpu capacity is 2
	I1209 11:58:01.673662  661546 node_conditions.go:105] duration metric: took 183.205852ms to run NodePressure ...
	I1209 11:58:01.673674  661546 start.go:241] waiting for startup goroutines ...
	I1209 11:58:01.673681  661546 start.go:246] waiting for cluster config update ...
	I1209 11:58:01.673691  661546 start.go:255] writing updated cluster config ...
	I1209 11:58:01.673995  661546 ssh_runner.go:195] Run: rm -f paused
	I1209 11:58:01.725363  661546 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:58:01.727275  661546 out.go:177] * Done! kubectl is now configured to use "embed-certs-005123" cluster and "default" namespace by default
	I1209 11:58:14.994765  662586 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 11:58:14.994918  662586 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 11:58:14.995050  662586 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:58:14.995118  662586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:58:14.995182  662586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:58:14.995272  662586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:58:14.995353  662586 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:58:14.995410  662586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:58:14.996905  662586 out.go:235]   - Generating certificates and keys ...
	I1209 11:58:14.997000  662586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:58:14.997055  662586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:58:14.997123  662586 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:58:14.997184  662586 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:58:14.997278  662586 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:58:14.997349  662586 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:58:14.997474  662586 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:58:14.997567  662586 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:58:14.997631  662586 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:58:14.997700  662586 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:58:14.997736  662586 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:58:14.997783  662586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:58:14.997826  662586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:58:14.997871  662586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:58:14.997930  662586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:58:14.997977  662586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:58:14.998063  662586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:58:14.998141  662586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:58:14.998199  662586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:58:14.998264  662586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:58:14.999539  662586 out.go:235]   - Booting up control plane ...
	I1209 11:58:14.999663  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:58:14.999748  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:58:14.999824  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:58:14.999946  662586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:58:15.000148  662586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:58:15.000221  662586 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:58:15.000326  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000532  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.000598  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000753  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.000814  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000971  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001064  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.001273  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001335  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.001486  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001493  662586 kubeadm.go:310] 
	I1209 11:58:15.001553  662586 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 11:58:15.001616  662586 kubeadm.go:310] 		timed out waiting for the condition
	I1209 11:58:15.001631  662586 kubeadm.go:310] 
	I1209 11:58:15.001685  662586 kubeadm.go:310] 	This error is likely caused by:
	I1209 11:58:15.001732  662586 kubeadm.go:310] 		- The kubelet is not running
	I1209 11:58:15.001883  662586 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 11:58:15.001897  662586 kubeadm.go:310] 
	I1209 11:58:15.002041  662586 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 11:58:15.002087  662586 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 11:58:15.002146  662586 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 11:58:15.002156  662586 kubeadm.go:310] 
	I1209 11:58:15.002294  662586 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 11:58:15.002373  662586 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 11:58:15.002380  662586 kubeadm.go:310] 
	I1209 11:58:15.002502  662586 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 11:58:15.002623  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 11:58:15.002725  662586 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 11:58:15.002799  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 11:58:15.002835  662586 kubeadm.go:310] 
	W1209 11:58:15.002956  662586 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1209 11:58:15.003022  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:58:15.469838  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:58:15.484503  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:58:15.493409  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:58:15.493430  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:58:15.493487  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:58:15.502508  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:58:15.502568  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:58:15.511743  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:58:15.519855  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:58:15.519913  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:58:15.528743  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:58:15.537000  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:58:15.537072  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:58:15.546520  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:58:15.555448  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:58:15.555526  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:58:15.565618  662586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:58:15.631763  662586 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:58:15.631832  662586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:58:15.798683  662586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:58:15.798822  662586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:58:15.798957  662586 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:58:15.974522  662586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:58:15.976286  662586 out.go:235]   - Generating certificates and keys ...
	I1209 11:58:15.976408  662586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:58:15.976492  662586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:58:15.976616  662586 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:58:15.976714  662586 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:58:15.976813  662586 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:58:15.976889  662586 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:58:15.976978  662586 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:58:15.977064  662586 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:58:15.977184  662586 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:58:15.977251  662586 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:58:15.977287  662586 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:58:15.977363  662586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:58:16.193383  662586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:58:16.324912  662586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:58:16.541372  662586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:58:16.786389  662586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:58:16.807241  662586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:58:16.808750  662586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:58:16.808823  662586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:58:16.951756  662586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:58:16.954338  662586 out.go:235]   - Booting up control plane ...
	I1209 11:58:16.954486  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:58:16.968892  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:58:16.970556  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:58:16.971301  662586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:58:16.974040  662586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:58:56.976537  662586 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:58:56.976966  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:56.977214  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:01.977861  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:01.978074  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:11.978821  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:11.979056  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:31.980118  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:31.980386  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 12:00:11.981507  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 12:00:11.981791  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 12:00:11.981804  662586 kubeadm.go:310] 
	I1209 12:00:11.981863  662586 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 12:00:11.981916  662586 kubeadm.go:310] 		timed out waiting for the condition
	I1209 12:00:11.981926  662586 kubeadm.go:310] 
	I1209 12:00:11.981977  662586 kubeadm.go:310] 	This error is likely caused by:
	I1209 12:00:11.982028  662586 kubeadm.go:310] 		- The kubelet is not running
	I1209 12:00:11.982232  662586 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 12:00:11.982262  662586 kubeadm.go:310] 
	I1209 12:00:11.982449  662586 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 12:00:11.982506  662586 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 12:00:11.982555  662586 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 12:00:11.982564  662586 kubeadm.go:310] 
	I1209 12:00:11.982709  662586 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 12:00:11.982824  662586 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 12:00:11.982837  662586 kubeadm.go:310] 
	I1209 12:00:11.982975  662586 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 12:00:11.983092  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 12:00:11.983186  662586 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 12:00:11.983259  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 12:00:11.983308  662586 kubeadm.go:310] 
	I1209 12:00:11.983442  662586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 12:00:11.983534  662586 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 12:00:11.983622  662586 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 12:00:11.983692  662586 kubeadm.go:394] duration metric: took 7m57.372617524s to StartCluster
	I1209 12:00:11.983778  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 12:00:11.983852  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 12:00:12.032068  662586 cri.go:89] found id: ""
	I1209 12:00:12.032110  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.032126  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 12:00:12.032139  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 12:00:12.032232  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 12:00:12.074929  662586 cri.go:89] found id: ""
	I1209 12:00:12.074977  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.074990  662586 logs.go:284] No container was found matching "etcd"
	I1209 12:00:12.075001  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 12:00:12.075074  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 12:00:12.113547  662586 cri.go:89] found id: ""
	I1209 12:00:12.113582  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.113592  662586 logs.go:284] No container was found matching "coredns"
	I1209 12:00:12.113598  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 12:00:12.113661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 12:00:12.147436  662586 cri.go:89] found id: ""
	I1209 12:00:12.147465  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.147475  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 12:00:12.147481  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 12:00:12.147535  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 12:00:12.184398  662586 cri.go:89] found id: ""
	I1209 12:00:12.184439  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.184453  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 12:00:12.184463  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 12:00:12.184541  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 12:00:12.230844  662586 cri.go:89] found id: ""
	I1209 12:00:12.230884  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.230896  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 12:00:12.230905  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 12:00:12.230981  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 12:00:12.264897  662586 cri.go:89] found id: ""
	I1209 12:00:12.264930  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.264939  662586 logs.go:284] No container was found matching "kindnet"
	I1209 12:00:12.264946  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 12:00:12.265001  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 12:00:12.303553  662586 cri.go:89] found id: ""
	I1209 12:00:12.303594  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.303607  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 12:00:12.303622  662586 logs.go:123] Gathering logs for container status ...
	I1209 12:00:12.303638  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 12:00:12.342799  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 12:00:12.342838  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 12:00:12.392992  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 12:00:12.393039  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 12:00:12.407065  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 12:00:12.407100  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 12:00:12.483599  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 12:00:12.483651  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 12:00:12.483675  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1209 12:00:12.591518  662586 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1209 12:00:12.591615  662586 out.go:270] * 
	W1209 12:00:12.591715  662586 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 12:00:12.591737  662586 out.go:270] * 
	W1209 12:00:12.592644  662586 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 12:00:12.596340  662586 out.go:201] 
	W1209 12:00:12.597706  662586 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 12:00:12.597757  662586 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 12:00:12.597798  662586 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 12:00:12.599219  662586 out.go:201] 
	
	
	==> CRI-O <==
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.842217772Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745999842197398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6c56c0b-7d14-4dfd-8290-38241bf3ffa6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.842907124Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c861ee40-e23f-47fb-a33a-2a1e89278437 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.842953930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c861ee40-e23f-47fb-a33a-2a1e89278437 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.843190209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6497e24ed8d6cf7d755dd9862baef214e283f280f4ef19432b7b946ffdc04af,PodSandboxId:9c6c1503f2aa142f6e1b790794ac4f72469f489f668cb81dbd99ad79be651d54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745455068209528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b53e3ba-9bc9-4b5a-bec9-d06336616c8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc393f8ca069a02f4255df67b57878759855f7c23b28e333fa0164b3723d3ee,PodSandboxId:7b14ae84b4e8d70026481c33596cd578202967e2d5f80b7f1344ad74a2a8aef4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745454054388888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bb47s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c6e423fd231089f00f6c48db7dc922a6a0e874923f459286cce0c29e586c56,PodSandboxId:3bf9b1181e2f348d99e3601e877d02010b31bb3f3dd25be4c94adeac273be018,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745454043841945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rr27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a5dd0401-80bf-4c87-9771-e1837c960425,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a06e322b94f2e8c310d68246143453b976f57c7b04a64527bb6d2624556e53b8,PodSandboxId:505c512572eeadc3423e7d92e427ae440cce2a8578f8be04d1e408798f589494,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733745453383214534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgs52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5a3463e-e955-4345-9559-b23cce44fa0e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe459e350a0a69c68e97eabcc631964637bad5192e31fc4f9bde455313887ff,PodSandboxId:54dba79d9d7cecfcb8ad76bb275e38e59d7421c662be0f31363876f1335e47a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173374544
2491333987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d9a3116ed44b8533a3cadf46fa536a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1efd65a61828e45b53b8be59a071fc5b43c47d488fe8aba5126af1fe231338fa,PodSandboxId:e9eddec54729a9e6fc7f103e8e007467a867f4ec9e1e250497854ff9068a0e5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Cre
atedAt:1733745442461208526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ab6bcaa941321f87de927012cee9d,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9f81dc1c7ecf5412ad6a0ac64634688986ff124ce873c0846bd10cf54ff761,PodSandboxId:0e1162e4c3cf07ce5d3804edb5623a5567c82710bf86e6dd75a93bddd7c26573,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17337
45442475944828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73f65421285a8dd1839e442c0c6af24,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba56b590110406b1c4e6646e981b8f00f6fbc55308dacd367bda5339a72a122,PodSandboxId:62374aca74b288e067c703ef56dd1a6f6f6ead07233461f8e14daf9b603e84e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745442438840869,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05146362456c7def7e2b7c92028be8b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404b8057940700687ef437a2f86b23b1eb47d811420982efb7d526eb07510390,PodSandboxId:913e6ad25da255a4f64f5ac795cc16bcd6a8e9cd85a4c954180010d07e3629d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733745155403085403,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05146362456c7def7e2b7c92028be8b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c861ee40-e23f-47fb-a33a-2a1e89278437 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.878275444Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c3dc3ad0-8a45-485c-8552-8fb8d09a1176 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.878400600Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3dc3ad0-8a45-485c-8552-8fb8d09a1176 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.879505993Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f80eb55-b7b3-4d00-b033-b221fb5d7474 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.879885900Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745999879863551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f80eb55-b7b3-4d00-b033-b221fb5d7474 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.880434534Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7ceebcf-18f9-41a6-8615-560f8ebaf9b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.880504742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7ceebcf-18f9-41a6-8615-560f8ebaf9b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.880716454Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6497e24ed8d6cf7d755dd9862baef214e283f280f4ef19432b7b946ffdc04af,PodSandboxId:9c6c1503f2aa142f6e1b790794ac4f72469f489f668cb81dbd99ad79be651d54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745455068209528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b53e3ba-9bc9-4b5a-bec9-d06336616c8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc393f8ca069a02f4255df67b57878759855f7c23b28e333fa0164b3723d3ee,PodSandboxId:7b14ae84b4e8d70026481c33596cd578202967e2d5f80b7f1344ad74a2a8aef4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745454054388888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bb47s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c6e423fd231089f00f6c48db7dc922a6a0e874923f459286cce0c29e586c56,PodSandboxId:3bf9b1181e2f348d99e3601e877d02010b31bb3f3dd25be4c94adeac273be018,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745454043841945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rr27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a5dd0401-80bf-4c87-9771-e1837c960425,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a06e322b94f2e8c310d68246143453b976f57c7b04a64527bb6d2624556e53b8,PodSandboxId:505c512572eeadc3423e7d92e427ae440cce2a8578f8be04d1e408798f589494,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733745453383214534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgs52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5a3463e-e955-4345-9559-b23cce44fa0e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe459e350a0a69c68e97eabcc631964637bad5192e31fc4f9bde455313887ff,PodSandboxId:54dba79d9d7cecfcb8ad76bb275e38e59d7421c662be0f31363876f1335e47a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173374544
2491333987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d9a3116ed44b8533a3cadf46fa536a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1efd65a61828e45b53b8be59a071fc5b43c47d488fe8aba5126af1fe231338fa,PodSandboxId:e9eddec54729a9e6fc7f103e8e007467a867f4ec9e1e250497854ff9068a0e5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Cre
atedAt:1733745442461208526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ab6bcaa941321f87de927012cee9d,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9f81dc1c7ecf5412ad6a0ac64634688986ff124ce873c0846bd10cf54ff761,PodSandboxId:0e1162e4c3cf07ce5d3804edb5623a5567c82710bf86e6dd75a93bddd7c26573,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17337
45442475944828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73f65421285a8dd1839e442c0c6af24,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba56b590110406b1c4e6646e981b8f00f6fbc55308dacd367bda5339a72a122,PodSandboxId:62374aca74b288e067c703ef56dd1a6f6f6ead07233461f8e14daf9b603e84e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745442438840869,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05146362456c7def7e2b7c92028be8b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404b8057940700687ef437a2f86b23b1eb47d811420982efb7d526eb07510390,PodSandboxId:913e6ad25da255a4f64f5ac795cc16bcd6a8e9cd85a4c954180010d07e3629d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733745155403085403,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05146362456c7def7e2b7c92028be8b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7ceebcf-18f9-41a6-8615-560f8ebaf9b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.929465362Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0e1297a-e32e-4172-9aa9-656d498f6e1d name=/runtime.v1.RuntimeService/Version
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.929547324Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0e1297a-e32e-4172-9aa9-656d498f6e1d name=/runtime.v1.RuntimeService/Version
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.930561363Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=89c6c0d7-c439-4058-97ea-8b0385a94e54 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.930959088Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745999930935710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89c6c0d7-c439-4058-97ea-8b0385a94e54 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.931491769Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b92bfd26-5a99-41b0-bfbc-187e16234ec5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.931557158Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b92bfd26-5a99-41b0-bfbc-187e16234ec5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.931767235Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6497e24ed8d6cf7d755dd9862baef214e283f280f4ef19432b7b946ffdc04af,PodSandboxId:9c6c1503f2aa142f6e1b790794ac4f72469f489f668cb81dbd99ad79be651d54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745455068209528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b53e3ba-9bc9-4b5a-bec9-d06336616c8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc393f8ca069a02f4255df67b57878759855f7c23b28e333fa0164b3723d3ee,PodSandboxId:7b14ae84b4e8d70026481c33596cd578202967e2d5f80b7f1344ad74a2a8aef4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745454054388888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bb47s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c6e423fd231089f00f6c48db7dc922a6a0e874923f459286cce0c29e586c56,PodSandboxId:3bf9b1181e2f348d99e3601e877d02010b31bb3f3dd25be4c94adeac273be018,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745454043841945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rr27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a5dd0401-80bf-4c87-9771-e1837c960425,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a06e322b94f2e8c310d68246143453b976f57c7b04a64527bb6d2624556e53b8,PodSandboxId:505c512572eeadc3423e7d92e427ae440cce2a8578f8be04d1e408798f589494,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733745453383214534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgs52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5a3463e-e955-4345-9559-b23cce44fa0e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe459e350a0a69c68e97eabcc631964637bad5192e31fc4f9bde455313887ff,PodSandboxId:54dba79d9d7cecfcb8ad76bb275e38e59d7421c662be0f31363876f1335e47a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173374544
2491333987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d9a3116ed44b8533a3cadf46fa536a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1efd65a61828e45b53b8be59a071fc5b43c47d488fe8aba5126af1fe231338fa,PodSandboxId:e9eddec54729a9e6fc7f103e8e007467a867f4ec9e1e250497854ff9068a0e5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Cre
atedAt:1733745442461208526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ab6bcaa941321f87de927012cee9d,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9f81dc1c7ecf5412ad6a0ac64634688986ff124ce873c0846bd10cf54ff761,PodSandboxId:0e1162e4c3cf07ce5d3804edb5623a5567c82710bf86e6dd75a93bddd7c26573,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17337
45442475944828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73f65421285a8dd1839e442c0c6af24,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba56b590110406b1c4e6646e981b8f00f6fbc55308dacd367bda5339a72a122,PodSandboxId:62374aca74b288e067c703ef56dd1a6f6f6ead07233461f8e14daf9b603e84e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745442438840869,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05146362456c7def7e2b7c92028be8b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404b8057940700687ef437a2f86b23b1eb47d811420982efb7d526eb07510390,PodSandboxId:913e6ad25da255a4f64f5ac795cc16bcd6a8e9cd85a4c954180010d07e3629d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733745155403085403,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05146362456c7def7e2b7c92028be8b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b92bfd26-5a99-41b0-bfbc-187e16234ec5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.969183081Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f395bd6d-7dcc-4aec-9539-5257651a4139 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.969264251Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f395bd6d-7dcc-4aec-9539-5257651a4139 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.970433390Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c15769f7-c225-45be-8c98-9f4d23bb0f6c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.970825829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745999970804147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c15769f7-c225-45be-8c98-9f4d23bb0f6c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.971484017Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d65957dc-0e13-499c-8437-d0be99d759fc name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.971552262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d65957dc-0e13-499c-8437-d0be99d759fc name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:06:39 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:06:39.971755240Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6497e24ed8d6cf7d755dd9862baef214e283f280f4ef19432b7b946ffdc04af,PodSandboxId:9c6c1503f2aa142f6e1b790794ac4f72469f489f668cb81dbd99ad79be651d54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745455068209528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b53e3ba-9bc9-4b5a-bec9-d06336616c8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc393f8ca069a02f4255df67b57878759855f7c23b28e333fa0164b3723d3ee,PodSandboxId:7b14ae84b4e8d70026481c33596cd578202967e2d5f80b7f1344ad74a2a8aef4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745454054388888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bb47s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c6e423fd231089f00f6c48db7dc922a6a0e874923f459286cce0c29e586c56,PodSandboxId:3bf9b1181e2f348d99e3601e877d02010b31bb3f3dd25be4c94adeac273be018,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745454043841945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rr27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a5dd0401-80bf-4c87-9771-e1837c960425,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a06e322b94f2e8c310d68246143453b976f57c7b04a64527bb6d2624556e53b8,PodSandboxId:505c512572eeadc3423e7d92e427ae440cce2a8578f8be04d1e408798f589494,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733745453383214534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgs52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5a3463e-e955-4345-9559-b23cce44fa0e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe459e350a0a69c68e97eabcc631964637bad5192e31fc4f9bde455313887ff,PodSandboxId:54dba79d9d7cecfcb8ad76bb275e38e59d7421c662be0f31363876f1335e47a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173374544
2491333987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d9a3116ed44b8533a3cadf46fa536a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1efd65a61828e45b53b8be59a071fc5b43c47d488fe8aba5126af1fe231338fa,PodSandboxId:e9eddec54729a9e6fc7f103e8e007467a867f4ec9e1e250497854ff9068a0e5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Cre
atedAt:1733745442461208526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ab6bcaa941321f87de927012cee9d,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9f81dc1c7ecf5412ad6a0ac64634688986ff124ce873c0846bd10cf54ff761,PodSandboxId:0e1162e4c3cf07ce5d3804edb5623a5567c82710bf86e6dd75a93bddd7c26573,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17337
45442475944828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73f65421285a8dd1839e442c0c6af24,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba56b590110406b1c4e6646e981b8f00f6fbc55308dacd367bda5339a72a122,PodSandboxId:62374aca74b288e067c703ef56dd1a6f6f6ead07233461f8e14daf9b603e84e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745442438840869,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05146362456c7def7e2b7c92028be8b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404b8057940700687ef437a2f86b23b1eb47d811420982efb7d526eb07510390,PodSandboxId:913e6ad25da255a4f64f5ac795cc16bcd6a8e9cd85a4c954180010d07e3629d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733745155403085403,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05146362456c7def7e2b7c92028be8b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d65957dc-0e13-499c-8437-d0be99d759fc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a6497e24ed8d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   9c6c1503f2aa1       storage-provisioner
	2bc393f8ca069       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   7b14ae84b4e8d       coredns-7c65d6cfc9-bb47s
	d8c6e423fd231       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   3bf9b1181e2f3       coredns-7c65d6cfc9-7rr27
	a06e322b94f2e       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   505c512572eea       kube-proxy-pgs52
	2fe459e350a0a       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   54dba79d9d7ce       kube-controller-manager-default-k8s-diff-port-482476
	3c9f81dc1c7ec       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   0e1162e4c3cf0       etcd-default-k8s-diff-port-482476
	1efd65a61828e       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   e9eddec54729a       kube-scheduler-default-k8s-diff-port-482476
	1ba56b5901104       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   62374aca74b28       kube-apiserver-default-k8s-diff-port-482476
	404b805794070       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   913e6ad25da25       kube-apiserver-default-k8s-diff-port-482476
	
	
	==> coredns [2bc393f8ca069a02f4255df67b57878759855f7c23b28e333fa0164b3723d3ee] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d8c6e423fd231089f00f6c48db7dc922a6a0e874923f459286cce0c29e586c56] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-482476
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-482476
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=default-k8s-diff-port-482476
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T11_57_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 11:57:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-482476
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 12:06:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 12:02:44 +0000   Mon, 09 Dec 2024 11:57:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 12:02:44 +0000   Mon, 09 Dec 2024 11:57:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 12:02:44 +0000   Mon, 09 Dec 2024 11:57:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 12:02:44 +0000   Mon, 09 Dec 2024 11:57:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.25
	  Hostname:    default-k8s-diff-port-482476
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3db52be855aa4f8d8abfa5bc1b27dc59
	  System UUID:                3db52be8-55aa-4f8d-8abf-a5bc1b27dc59
	  Boot ID:                    090d3d7b-d360-4f8d-8f79-abf46cb9ac89
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-7rr27                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 coredns-7c65d6cfc9-bb47s                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-default-k8s-diff-port-482476                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m13s
	  kube-system                 kube-apiserver-default-k8s-diff-port-482476             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-482476    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-proxy-pgs52                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-default-k8s-diff-port-482476             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 metrics-server-6867b74b74-2lmtn                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m6s   kube-proxy       
	  Normal  Starting                 9m13s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m13s  kubelet          Node default-k8s-diff-port-482476 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m13s  kubelet          Node default-k8s-diff-port-482476 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m13s  kubelet          Node default-k8s-diff-port-482476 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m9s   node-controller  Node default-k8s-diff-port-482476 event: Registered Node default-k8s-diff-port-482476 in Controller
	
	
	==> dmesg <==
	[  +0.051967] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.129819] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.208254] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.409388] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.486673] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.070745] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075723] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.192715] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.117868] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.318838] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[  +4.147301] systemd-fstab-generator[791]: Ignoring "noauto" option for root device
	[  +2.112878] systemd-fstab-generator[913]: Ignoring "noauto" option for root device
	[  +0.073644] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.566190] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.965514] kauditd_printk_skb: 90 callbacks suppressed
	[Dec 9 11:56] kauditd_printk_skb: 4 callbacks suppressed
	[Dec 9 11:57] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.632440] systemd-fstab-generator[2604]: Ignoring "noauto" option for root device
	[  +4.921203] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.639335] systemd-fstab-generator[2927]: Ignoring "noauto" option for root device
	[  +5.440565] systemd-fstab-generator[3059]: Ignoring "noauto" option for root device
	[  +0.090326] kauditd_printk_skb: 14 callbacks suppressed
	[Dec 9 11:58] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [3c9f81dc1c7ecf5412ad6a0ac64634688986ff124ce873c0846bd10cf54ff761] <==
	{"level":"info","ts":"2024-12-09T11:57:22.817681Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-12-09T11:57:22.817920Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"9bed631aec89f51c","initial-advertise-peer-urls":["https://192.168.50.25:2380"],"listen-peer-urls":["https://192.168.50.25:2380"],"advertise-client-urls":["https://192.168.50.25:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.25:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-09T11:57:22.817935Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-09T11:57:22.822377Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.25:2380"}
	{"level":"info","ts":"2024-12-09T11:57:22.822587Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c83bdfd763dc36e2","local-member-id":"9bed631aec89f51c","added-peer-id":"9bed631aec89f51c","added-peer-peer-urls":["https://192.168.50.25:2380"]}
	{"level":"info","ts":"2024-12-09T11:57:23.261410Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9bed631aec89f51c is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-09T11:57:23.261524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9bed631aec89f51c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-09T11:57:23.261573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9bed631aec89f51c received MsgPreVoteResp from 9bed631aec89f51c at term 1"}
	{"level":"info","ts":"2024-12-09T11:57:23.261612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9bed631aec89f51c became candidate at term 2"}
	{"level":"info","ts":"2024-12-09T11:57:23.261636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9bed631aec89f51c received MsgVoteResp from 9bed631aec89f51c at term 2"}
	{"level":"info","ts":"2024-12-09T11:57:23.261663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9bed631aec89f51c became leader at term 2"}
	{"level":"info","ts":"2024-12-09T11:57:23.261689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9bed631aec89f51c elected leader 9bed631aec89f51c at term 2"}
	{"level":"info","ts":"2024-12-09T11:57:23.265525Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9bed631aec89f51c","local-member-attributes":"{Name:default-k8s-diff-port-482476 ClientURLs:[https://192.168.50.25:2379]}","request-path":"/0/members/9bed631aec89f51c/attributes","cluster-id":"c83bdfd763dc36e2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-09T11:57:23.265609Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T11:57:23.266020Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:57:23.268920Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T11:57:23.269683Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-09T11:57:23.276353Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-09T11:57:23.276414Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-09T11:57:23.278387Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c83bdfd763dc36e2","local-member-id":"9bed631aec89f51c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:57:23.278488Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:57:23.278530Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:57:23.274553Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T11:57:23.290188Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T11:57:23.342262Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.25:2379"}
	
	
	==> kernel <==
	 12:06:40 up 14 min,  0 users,  load average: 0.30, 0.12, 0.08
	Linux default-k8s-diff-port-482476 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1ba56b590110406b1c4e6646e981b8f00f6fbc55308dacd367bda5339a72a122] <==
	W1209 12:02:25.937001       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:02:25.937105       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1209 12:02:25.938050       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:02:25.938140       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1209 12:03:25.938928       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:03:25.939026       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1209 12:03:25.939074       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:03:25.939142       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1209 12:03:25.940162       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:03:25.940275       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1209 12:05:25.940646       1 handler_proxy.go:99] no RequestInfo found in the context
	W1209 12:05:25.940715       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:05:25.940997       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1209 12:05:25.941070       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1209 12:05:25.942200       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:05:25.942258       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [404b8057940700687ef437a2f86b23b1eb47d811420982efb7d526eb07510390] <==
	W1209 11:57:15.213857       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.281115       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.300961       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.314618       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.412660       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.412661       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.439283       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.446766       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.492979       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.533730       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.568980       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.591592       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.599011       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.711884       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.791048       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.837428       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.839744       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.865629       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.898850       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.922095       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.941619       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.958102       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:16.120584       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:16.257291       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:16.259874       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [2fe459e350a0a69c68e97eabcc631964637bad5192e31fc4f9bde455313887ff] <==
	E1209 12:01:31.833238       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:01:32.368785       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:02:01.839019       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:02:02.378363       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:02:31.846993       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:02:32.387764       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1209 12:02:44.576400       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-482476"
	E1209 12:03:01.854070       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:03:02.396519       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:03:31.860404       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:03:32.404260       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1209 12:03:46.785443       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="257.473µs"
	I1209 12:03:58.782256       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="131.403µs"
	E1209 12:04:01.867405       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:04:02.412020       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:04:31.875132       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:04:32.420802       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:05:01.883523       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:05:02.429982       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:05:31.891000       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:05:32.439779       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:06:01.897280       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:06:02.446694       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:06:31.905109       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:06:32.453839       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a06e322b94f2e8c310d68246143453b976f57c7b04a64527bb6d2624556e53b8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 11:57:33.938738       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 11:57:33.975546       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.25"]
	E1209 11:57:33.975626       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 11:57:34.075671       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 11:57:34.075708       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 11:57:34.075738       1 server_linux.go:169] "Using iptables Proxier"
	I1209 11:57:34.079213       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 11:57:34.079675       1 server.go:483] "Version info" version="v1.31.2"
	I1209 11:57:34.079687       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 11:57:34.081807       1 config.go:199] "Starting service config controller"
	I1209 11:57:34.081825       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 11:57:34.081886       1 config.go:105] "Starting endpoint slice config controller"
	I1209 11:57:34.081892       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 11:57:34.082385       1 config.go:328] "Starting node config controller"
	I1209 11:57:34.082396       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 11:57:34.182478       1 shared_informer.go:320] Caches are synced for node config
	I1209 11:57:34.182519       1 shared_informer.go:320] Caches are synced for service config
	I1209 11:57:34.182549       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1efd65a61828e45b53b8be59a071fc5b43c47d488fe8aba5126af1fe231338fa] <==
	W1209 11:57:24.958517       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 11:57:24.958561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:25.766565       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1209 11:57:25.766675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:25.782455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1209 11:57:25.782499       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:25.959279       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 11:57:25.959373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:25.973473       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 11:57:25.973516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:25.999914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1209 11:57:25.999959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:26.010617       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 11:57:26.010689       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1209 11:57:26.094835       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1209 11:57:26.094918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:26.099191       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 11:57:26.099272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:26.141387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1209 11:57:26.141475       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:26.162913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 11:57:26.162957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:26.165709       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1209 11:57:26.165758       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1209 11:57:29.051506       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 12:05:27 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:05:27.932778    2934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745927932555585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:05:37 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:05:37.935615    2934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745937935087440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:05:37 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:05:37.935982    2934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745937935087440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:05:38 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:05:38.767720    2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lmtn" podUID="60803d31-d0b0-4d51-a9f2-cadafd184a90"
	Dec 09 12:05:47 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:05:47.938175    2934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745947937790805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:05:47 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:05:47.938202    2934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745947937790805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:05:52 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:05:52.768076    2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lmtn" podUID="60803d31-d0b0-4d51-a9f2-cadafd184a90"
	Dec 09 12:05:57 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:05:57.939366    2934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745957939047469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:05:57 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:05:57.939727    2934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745957939047469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:04 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:06:04.767885    2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lmtn" podUID="60803d31-d0b0-4d51-a9f2-cadafd184a90"
	Dec 09 12:06:07 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:06:07.941457    2934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745967941167767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:07 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:06:07.941497    2934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745967941167767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:17 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:06:17.770256    2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lmtn" podUID="60803d31-d0b0-4d51-a9f2-cadafd184a90"
	Dec 09 12:06:17 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:06:17.943205    2934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745977942903820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:17 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:06:17.943232    2934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745977942903820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:27 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:06:27.789992    2934 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 12:06:27 default-k8s-diff-port-482476 kubelet[2934]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 12:06:27 default-k8s-diff-port-482476 kubelet[2934]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 12:06:27 default-k8s-diff-port-482476 kubelet[2934]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 12:06:27 default-k8s-diff-port-482476 kubelet[2934]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 12:06:27 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:06:27.945249    2934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745987944268653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:27 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:06:27.945487    2934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745987944268653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:31 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:06:31.769237    2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lmtn" podUID="60803d31-d0b0-4d51-a9f2-cadafd184a90"
	Dec 09 12:06:37 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:06:37.947605    2934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745997947163527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:37 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:06:37.947631    2934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745997947163527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [a6497e24ed8d6cf7d755dd9862baef214e283f280f4ef19432b7b946ffdc04af] <==
	I1209 11:57:35.191270       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 11:57:35.205785       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 11:57:35.206168       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 11:57:35.218112       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 11:57:35.218286       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-482476_2fb80794-68f7-4032-bc04-c068a5d502d0!
	I1209 11:57:35.220806       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f2ba308-83e1-4c51-b2b2-b8ad9215dee4", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-482476_2fb80794-68f7-4032-bc04-c068a5d502d0 became leader
	I1209 11:57:35.320655       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-482476_2fb80794-68f7-4032-bc04-c068a5d502d0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-482476 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-2lmtn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-482476 describe pod metrics-server-6867b74b74-2lmtn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-482476 describe pod metrics-server-6867b74b74-2lmtn: exit status 1 (66.653664ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-2lmtn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-482476 describe pod metrics-server-6867b74b74-2lmtn: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1209 11:58:22.652689  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:59:45.729944  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-005123 -n embed-certs-005123
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-09 12:07:02.276928648 +0000 UTC m=+5610.737654167
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-005123 -n embed-certs-005123
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-005123 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-005123 logs -n 25: (1.996194774s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p running-upgrade-119214                              | running-upgrade-119214       | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-905993 | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:43 UTC |
	|         | disable-driver-mounts-905993                           |                              |         |         |                     |                     |
	| start   | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-005123            | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC | 09 Dec 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-005123                                  | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-820741             | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC | 09 Dec 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:46 UTC |
	| start   | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:47 UTC |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-005123                 | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-005123                                  | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-014592        | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-820741                  | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC | 09 Dec 24 11:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-482476  | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC | 09 Dec 24 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC |                     |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC | 09 Dec 24 11:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-014592             | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC | 09 Dec 24 11:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-482476       | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:49 UTC | 09 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 11:49:59
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 11:49:59.489110  663024 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:49:59.489218  663024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:49:59.489223  663024 out.go:358] Setting ErrFile to fd 2...
	I1209 11:49:59.489227  663024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:49:59.489393  663024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:49:59.489968  663024 out.go:352] Setting JSON to false
	I1209 11:49:59.491001  663024 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":16343,"bootTime":1733728656,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 11:49:59.491116  663024 start.go:139] virtualization: kvm guest
	I1209 11:49:59.493422  663024 out.go:177] * [default-k8s-diff-port-482476] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 11:49:59.495230  663024 notify.go:220] Checking for updates...
	I1209 11:49:59.495310  663024 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:49:59.496833  663024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:49:59.498350  663024 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:49:59.499799  663024 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:49:59.501159  663024 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 11:49:59.502351  663024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:49:59.503976  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:49:59.504355  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:49:59.504434  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:49:59.519867  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39863
	I1209 11:49:59.520292  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:49:59.520859  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:49:59.520886  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:49:59.521235  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:49:59.521438  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:49:59.521739  663024 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:49:59.522124  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:49:59.522225  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:49:59.537355  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39881
	I1209 11:49:59.537882  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:49:59.538473  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:49:59.538507  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:49:59.538862  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:49:59.539111  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:49:59.573642  663024 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 11:49:59.574808  663024 start.go:297] selected driver: kvm2
	I1209 11:49:59.574821  663024 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:49:59.574939  663024 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:49:59.575618  663024 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:49:59.575711  663024 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 11:49:59.591990  663024 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 11:49:59.592425  663024 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:49:59.592468  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:49:59.592500  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:49:59.592535  663024 start.go:340] cluster config:
	{Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:49:59.592645  663024 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:49:59.594451  663024 out.go:177] * Starting "default-k8s-diff-port-482476" primary control-plane node in "default-k8s-diff-port-482476" cluster
	I1209 11:49:56.270467  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:49:59.342522  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:49:59.595812  663024 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:49:59.595868  663024 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 11:49:59.595876  663024 cache.go:56] Caching tarball of preloaded images
	I1209 11:49:59.595966  663024 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 11:49:59.595978  663024 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 11:49:59.596080  663024 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/config.json ...
	I1209 11:49:59.596311  663024 start.go:360] acquireMachinesLock for default-k8s-diff-port-482476: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:50:05.422464  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:08.494459  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:14.574530  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:17.646514  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:23.726481  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:26.798485  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:32.878439  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:35.950501  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:42.030519  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:45.102528  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:51.182489  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:54.254539  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:00.334461  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:03.406475  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:09.486483  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:12.558522  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:18.638454  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:24.715494  662109 start.go:364] duration metric: took 4m3.035196519s to acquireMachinesLock for "no-preload-820741"
	I1209 11:51:24.715567  662109 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:51:24.715578  662109 fix.go:54] fixHost starting: 
	I1209 11:51:24.715984  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:51:24.716040  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:51:24.731722  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I1209 11:51:24.732247  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:51:24.732853  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:51:24.732876  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:51:24.733244  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:51:24.733437  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:24.733606  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:51:24.735295  662109 fix.go:112] recreateIfNeeded on no-preload-820741: state=Stopped err=<nil>
	I1209 11:51:24.735325  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	W1209 11:51:24.735521  662109 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:51:24.737237  662109 out.go:177] * Restarting existing kvm2 VM for "no-preload-820741" ...
	I1209 11:51:21.710446  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:24.712631  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:51:24.712695  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:51:24.713111  661546 buildroot.go:166] provisioning hostname "embed-certs-005123"
	I1209 11:51:24.713140  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:51:24.713398  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:51:24.715321  661546 machine.go:96] duration metric: took 4m34.547615205s to provisionDockerMachine
	I1209 11:51:24.715372  661546 fix.go:56] duration metric: took 4m34.572283015s for fixHost
	I1209 11:51:24.715381  661546 start.go:83] releasing machines lock for "embed-certs-005123", held for 4m34.572321017s
	W1209 11:51:24.715401  661546 start.go:714] error starting host: provision: host is not running
	W1209 11:51:24.715538  661546 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1209 11:51:24.715550  661546 start.go:729] Will try again in 5 seconds ...
	I1209 11:51:24.738507  662109 main.go:141] libmachine: (no-preload-820741) Calling .Start
	I1209 11:51:24.738692  662109 main.go:141] libmachine: (no-preload-820741) Ensuring networks are active...
	I1209 11:51:24.739450  662109 main.go:141] libmachine: (no-preload-820741) Ensuring network default is active
	I1209 11:51:24.739799  662109 main.go:141] libmachine: (no-preload-820741) Ensuring network mk-no-preload-820741 is active
	I1209 11:51:24.740206  662109 main.go:141] libmachine: (no-preload-820741) Getting domain xml...
	I1209 11:51:24.740963  662109 main.go:141] libmachine: (no-preload-820741) Creating domain...
	I1209 11:51:25.958244  662109 main.go:141] libmachine: (no-preload-820741) Waiting to get IP...
	I1209 11:51:25.959122  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:25.959507  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:25.959585  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:25.959486  663348 retry.go:31] will retry after 256.759149ms: waiting for machine to come up
	I1209 11:51:26.218626  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.219187  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.219222  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.219121  663348 retry.go:31] will retry after 259.957451ms: waiting for machine to come up
	I1209 11:51:26.480403  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.480800  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.480828  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.480753  663348 retry.go:31] will retry after 482.242492ms: waiting for machine to come up
	I1209 11:51:29.718422  661546 start.go:360] acquireMachinesLock for embed-certs-005123: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:51:26.964420  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.964870  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.964903  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.964821  663348 retry.go:31] will retry after 386.489156ms: waiting for machine to come up
	I1209 11:51:27.353471  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:27.353850  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:27.353875  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:27.353796  663348 retry.go:31] will retry after 602.322538ms: waiting for machine to come up
	I1209 11:51:27.957621  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:27.958020  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:27.958051  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:27.957967  663348 retry.go:31] will retry after 747.355263ms: waiting for machine to come up
	I1209 11:51:28.707049  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:28.707486  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:28.707515  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:28.707436  663348 retry.go:31] will retry after 1.034218647s: waiting for machine to come up
	I1209 11:51:29.743755  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:29.744171  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:29.744213  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:29.744119  663348 retry.go:31] will retry after 1.348194555s: waiting for machine to come up
	I1209 11:51:31.094696  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:31.095202  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:31.095234  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:31.095124  663348 retry.go:31] will retry after 1.226653754s: waiting for machine to come up
	I1209 11:51:32.323529  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:32.323935  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:32.323959  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:32.323884  663348 retry.go:31] will retry after 2.008914491s: waiting for machine to come up
	I1209 11:51:34.335246  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:34.335619  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:34.335658  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:34.335593  663348 retry.go:31] will retry after 1.835576732s: waiting for machine to come up
	I1209 11:51:36.173316  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:36.173752  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:36.173786  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:36.173711  663348 retry.go:31] will retry after 3.204076548s: waiting for machine to come up
	I1209 11:51:39.382184  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:39.382619  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:39.382656  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:39.382560  663348 retry.go:31] will retry after 3.298451611s: waiting for machine to come up
	I1209 11:51:44.103077  662586 start.go:364] duration metric: took 3m16.308265809s to acquireMachinesLock for "old-k8s-version-014592"
	I1209 11:51:44.103164  662586 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:51:44.103178  662586 fix.go:54] fixHost starting: 
	I1209 11:51:44.103657  662586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:51:44.103716  662586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:51:44.121162  662586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I1209 11:51:44.121672  662586 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:51:44.122203  662586 main.go:141] libmachine: Using API Version  1
	I1209 11:51:44.122232  662586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:51:44.122644  662586 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:51:44.122852  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:51:44.123023  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetState
	I1209 11:51:44.124544  662586 fix.go:112] recreateIfNeeded on old-k8s-version-014592: state=Stopped err=<nil>
	I1209 11:51:44.124567  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	W1209 11:51:44.124704  662586 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:51:44.126942  662586 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-014592" ...
	I1209 11:51:42.684438  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.684824  662109 main.go:141] libmachine: (no-preload-820741) Found IP for machine: 192.168.39.169
	I1209 11:51:42.684859  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has current primary IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.684867  662109 main.go:141] libmachine: (no-preload-820741) Reserving static IP address...
	I1209 11:51:42.685269  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "no-preload-820741", mac: "52:54:00:27:4c:0e", ip: "192.168.39.169"} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.685296  662109 main.go:141] libmachine: (no-preload-820741) DBG | skip adding static IP to network mk-no-preload-820741 - found existing host DHCP lease matching {name: "no-preload-820741", mac: "52:54:00:27:4c:0e", ip: "192.168.39.169"}
	I1209 11:51:42.685311  662109 main.go:141] libmachine: (no-preload-820741) Reserved static IP address: 192.168.39.169
	I1209 11:51:42.685334  662109 main.go:141] libmachine: (no-preload-820741) Waiting for SSH to be available...
	I1209 11:51:42.685348  662109 main.go:141] libmachine: (no-preload-820741) DBG | Getting to WaitForSSH function...
	I1209 11:51:42.687295  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.687588  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.687625  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.687702  662109 main.go:141] libmachine: (no-preload-820741) DBG | Using SSH client type: external
	I1209 11:51:42.687790  662109 main.go:141] libmachine: (no-preload-820741) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa (-rw-------)
	I1209 11:51:42.687824  662109 main.go:141] libmachine: (no-preload-820741) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:51:42.687844  662109 main.go:141] libmachine: (no-preload-820741) DBG | About to run SSH command:
	I1209 11:51:42.687857  662109 main.go:141] libmachine: (no-preload-820741) DBG | exit 0
	I1209 11:51:42.822609  662109 main.go:141] libmachine: (no-preload-820741) DBG | SSH cmd err, output: <nil>: 
	I1209 11:51:42.822996  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetConfigRaw
	I1209 11:51:42.823665  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:42.826484  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.826783  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.826808  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.827050  662109 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/config.json ...
	I1209 11:51:42.827323  662109 machine.go:93] provisionDockerMachine start ...
	I1209 11:51:42.827346  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:42.827620  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:42.830224  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.830569  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.830599  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.830717  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:42.830909  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.831107  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.831274  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:42.831454  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:42.831790  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:42.831807  662109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:51:42.938456  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:51:42.938500  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:42.938778  662109 buildroot.go:166] provisioning hostname "no-preload-820741"
	I1209 11:51:42.938813  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:42.939023  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:42.941706  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.942236  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.942267  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.942390  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:42.942606  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.942773  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.942922  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:42.943177  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:42.943382  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:42.943406  662109 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-820741 && echo "no-preload-820741" | sudo tee /etc/hostname
	I1209 11:51:43.065816  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-820741
	
	I1209 11:51:43.065849  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.068607  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.068916  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.068951  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.069127  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.069256  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.069351  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.069514  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.069637  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.069841  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.069861  662109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-820741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-820741/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-820741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:51:43.182210  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:51:43.182257  662109 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:51:43.182289  662109 buildroot.go:174] setting up certificates
	I1209 11:51:43.182305  662109 provision.go:84] configureAuth start
	I1209 11:51:43.182323  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:43.182674  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:43.185513  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.185872  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.185897  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.186018  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.188128  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.188482  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.188534  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.188668  662109 provision.go:143] copyHostCerts
	I1209 11:51:43.188752  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:51:43.188774  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:51:43.188840  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:51:43.188928  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:51:43.188936  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:51:43.188963  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:51:43.189019  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:51:43.189027  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:51:43.189049  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:51:43.189104  662109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.no-preload-820741 san=[127.0.0.1 192.168.39.169 localhost minikube no-preload-820741]
	I1209 11:51:43.488258  662109 provision.go:177] copyRemoteCerts
	I1209 11:51:43.488336  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:51:43.488367  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.491689  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.492025  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.492059  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.492267  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.492465  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.492635  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.492768  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:43.577708  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:51:43.602000  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 11:51:43.627251  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:51:43.651591  662109 provision.go:87] duration metric: took 469.266358ms to configureAuth
	I1209 11:51:43.651626  662109 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:51:43.651863  662109 config.go:182] Loaded profile config "no-preload-820741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:51:43.652059  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.655150  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.655489  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.655518  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.655738  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.655963  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.656146  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.656295  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.656483  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.656688  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.656710  662109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:51:43.870704  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:51:43.870738  662109 machine.go:96] duration metric: took 1.043398486s to provisionDockerMachine
	I1209 11:51:43.870756  662109 start.go:293] postStartSetup for "no-preload-820741" (driver="kvm2")
	I1209 11:51:43.870771  662109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:51:43.870796  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:43.871158  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:51:43.871186  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.873863  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.874207  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.874230  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.874408  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.874610  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.874800  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.874925  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:43.956874  662109 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:51:43.960825  662109 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:51:43.960853  662109 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:51:43.960919  662109 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:51:43.960993  662109 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:51:43.961095  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:51:43.970138  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:51:43.991975  662109 start.go:296] duration metric: took 121.20118ms for postStartSetup
	I1209 11:51:43.992020  662109 fix.go:56] duration metric: took 19.276442325s for fixHost
	I1209 11:51:43.992043  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.994707  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.995035  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.995069  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.995186  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.995403  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.995568  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.995716  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.995927  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.996107  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.996117  662109 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:51:44.102890  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745104.077047488
	
	I1209 11:51:44.102914  662109 fix.go:216] guest clock: 1733745104.077047488
	I1209 11:51:44.102922  662109 fix.go:229] Guest: 2024-12-09 11:51:44.077047488 +0000 UTC Remote: 2024-12-09 11:51:43.992024296 +0000 UTC m=+262.463051778 (delta=85.023192ms)
	I1209 11:51:44.102952  662109 fix.go:200] guest clock delta is within tolerance: 85.023192ms
	I1209 11:51:44.102957  662109 start.go:83] releasing machines lock for "no-preload-820741", held for 19.387413234s
	I1209 11:51:44.102980  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.103272  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:44.105929  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.106314  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.106341  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.106567  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107102  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107323  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107453  662109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:51:44.107507  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:44.107640  662109 ssh_runner.go:195] Run: cat /version.json
	I1209 11:51:44.107672  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:44.110422  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110792  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.110822  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110840  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110984  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:44.111194  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:44.111376  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.111395  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.111408  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:44.111569  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:44.111589  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:44.111722  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:44.111827  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:44.111986  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:44.228799  662109 ssh_runner.go:195] Run: systemctl --version
	I1209 11:51:44.234678  662109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:51:44.383290  662109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:51:44.388906  662109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:51:44.388981  662109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:51:44.405271  662109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:51:44.405308  662109 start.go:495] detecting cgroup driver to use...
	I1209 11:51:44.405389  662109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:51:44.425480  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:51:44.439827  662109 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:51:44.439928  662109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:51:44.454750  662109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:51:44.470828  662109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:51:44.595400  662109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:51:44.756743  662109 docker.go:233] disabling docker service ...
	I1209 11:51:44.756817  662109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:51:44.774069  662109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:51:44.788188  662109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:51:44.909156  662109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:51:45.036992  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:51:45.051284  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:51:45.071001  662109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:51:45.071074  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.081491  662109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:51:45.081549  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.091476  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.103237  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.114723  662109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:51:45.126330  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.136501  662109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.152804  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.163221  662109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:51:45.173297  662109 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:51:45.173379  662109 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:51:45.186209  662109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:51:45.195773  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:51:45.339593  662109 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:51:45.438766  662109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:51:45.438851  662109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:51:45.444775  662109 start.go:563] Will wait 60s for crictl version
	I1209 11:51:45.444847  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:45.449585  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:51:45.493796  662109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:51:45.493899  662109 ssh_runner.go:195] Run: crio --version
	I1209 11:51:45.521391  662109 ssh_runner.go:195] Run: crio --version
	I1209 11:51:45.551249  662109 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:51:45.552714  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:45.555910  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:45.556271  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:45.556298  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:45.556571  662109 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 11:51:45.560718  662109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:51:45.573027  662109 kubeadm.go:883] updating cluster {Name:no-preload-820741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:51:45.573171  662109 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:51:45.573226  662109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:51:45.613696  662109 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:51:45.613724  662109 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 11:51:45.613810  662109 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.613847  662109 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.613864  662109 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.613880  662109 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:45.613857  662109 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1209 11:51:45.613939  662109 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.613801  662109 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.613810  662109 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:45.615885  662109 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.615983  662109 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.615889  662109 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:45.615885  662109 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:45.615893  662109 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.615891  662109 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1209 11:51:45.615897  662109 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.615893  662109 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.819757  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.836546  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1209 11:51:45.851918  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.857461  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.857468  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.863981  662109 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1209 11:51:45.864038  662109 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.864122  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:45.865289  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.868361  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.030476  662109 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1209 11:51:46.030525  662109 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.030582  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030525  662109 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1209 11:51:46.030603  662109 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1209 11:51:46.030625  662109 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.030652  662109 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.030694  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030655  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030720  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.030760  662109 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1209 11:51:46.030794  662109 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.030823  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030823  662109 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1209 11:51:46.030845  662109 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.030868  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.041983  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.042072  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.042088  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.086909  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.086966  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.086997  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.141636  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.141723  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.141779  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.249908  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.249972  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.250024  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.250056  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.266345  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.266425  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.376691  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1209 11:51:46.376784  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1209 11:51:46.376904  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.376937  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.376911  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:46.376980  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.407997  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1209 11:51:46.408015  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1209 11:51:46.408120  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:46.408120  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:46.450341  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1209 11:51:46.450374  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.450445  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.450503  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1209 11:51:46.450537  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1209 11:51:46.450541  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1209 11:51:46.450570  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1209 11:51:46.450621  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:46.450621  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:46.450621  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
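
The lines above are minikube's cache-load path: each required image is probed with podman image inspect, any stale tag is cleared with crictl rmi, and the image is then loaded from its tarball under /var/lib/minikube/images. A minimal Go sketch of that probe-then-load sequence follows; the function names are illustrative only, and the commands run locally here rather than through minikube's ssh_runner.

package main

import (
	"fmt"
	"os/exec"
)

// imagePresent mirrors the "sudo podman image inspect --format {{.Id}}" probe above:
// a zero exit status means the container runtime already has the image.
func imagePresent(image string) bool {
	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil
}

// loadCached mirrors the crictl rmi + podman load steps: clear any stale tag,
// then load the image from its cached tarball.
func loadCached(image, tarball string) error {
	_ = exec.Command("sudo", "crictl", "rmi", image).Run() // ignore "image not found"
	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
}

func main() {
	img := "registry.k8s.io/kube-scheduler:v1.31.2"
	if !imagePresent(img) {
		if err := loadCached(img, "/var/lib/minikube/images/kube-scheduler_v1.31.2"); err != nil {
			fmt.Println("load failed:", err)
		}
	}
}
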
	I1209 11:51:44.128421  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .Start
	I1209 11:51:44.128663  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring networks are active...
	I1209 11:51:44.129435  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring network default is active
	I1209 11:51:44.129805  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring network mk-old-k8s-version-014592 is active
	I1209 11:51:44.130314  662586 main.go:141] libmachine: (old-k8s-version-014592) Getting domain xml...
	I1209 11:51:44.131070  662586 main.go:141] libmachine: (old-k8s-version-014592) Creating domain...
	I1209 11:51:45.405214  662586 main.go:141] libmachine: (old-k8s-version-014592) Waiting to get IP...
	I1209 11:51:45.406116  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:45.406680  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:45.406716  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:45.406613  663492 retry.go:31] will retry after 249.130873ms: waiting for machine to come up
	I1209 11:51:45.657224  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:45.657727  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:45.657756  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:45.657687  663492 retry.go:31] will retry after 363.458278ms: waiting for machine to come up
	I1209 11:51:46.023431  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.023912  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.023945  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.023851  663492 retry.go:31] will retry after 313.220722ms: waiting for machine to come up
	I1209 11:51:46.339300  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.339850  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.339876  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.339791  663492 retry.go:31] will retry after 517.613322ms: waiting for machine to come up
	I1209 11:51:46.859825  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.860229  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.860260  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.860198  663492 retry.go:31] will retry after 710.195232ms: waiting for machine to come up
	I1209 11:51:47.572460  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:47.573030  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:47.573080  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:47.573008  663492 retry.go:31] will retry after 620.717522ms: waiting for machine to come up
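
The interleaved 662586 lines show the kvm2 driver polling libvirt's DHCP leases for the restarted old-k8s-version VM, retrying with a growing, jittered delay until an address appears. A rough sketch of such a wait loop is below; getIP is a stand-in for the driver's lease lookup, not minikube's actual retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// getIP is a placeholder for the kvm2 driver's DHCP-lease lookup; it returns an
// error until the guest has obtained an address.
func getIP() (string, error) { return "", errors.New("machine has no IP yet") }

// waitForIP retries getIP with a growing, jittered delay, roughly matching the
// "will retry after ...: waiting for machine to come up" lines above.
func waitForIP(timeout time.Duration) (string, error) {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ip, err := getIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("machine did not come up within %s", timeout)
}

func main() {
	if ip, err := waitForIP(2 * time.Minute); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("got IP:", ip)
	}
}
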
	I1209 11:51:46.869631  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.822213  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.371704342s)
	I1209 11:51:48.822263  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1209 11:51:48.822262  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.371603127s)
	I1209 11:51:48.822296  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1209 11:51:48.822295  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.371584353s)
	I1209 11:51:48.822298  662109 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:48.822309  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1209 11:51:48.822324  662109 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.952666874s)
	I1209 11:51:48.822364  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:48.822367  662109 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1209 11:51:48.822416  662109 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.822460  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:50.794288  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.971891497s)
	I1209 11:51:50.794330  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1209 11:51:50.794357  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:50.794357  662109 ssh_runner.go:235] Completed: which crictl: (1.971876587s)
	I1209 11:51:50.794417  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:50.794437  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.195603  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:48.196140  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:48.196172  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:48.196083  663492 retry.go:31] will retry after 747.45082ms: waiting for machine to come up
	I1209 11:51:48.945230  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:48.945682  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:48.945737  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:48.945661  663492 retry.go:31] will retry after 1.307189412s: waiting for machine to come up
	I1209 11:51:50.254747  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:50.255335  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:50.255359  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:50.255276  663492 retry.go:31] will retry after 1.269881759s: waiting for machine to come up
	I1209 11:51:51.526966  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:51.527400  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:51.527431  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:51.527348  663492 retry.go:31] will retry after 1.424091669s: waiting for machine to come up
	I1209 11:51:52.958981  662109 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.164517823s)
	I1209 11:51:52.959044  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.164597978s)
	I1209 11:51:52.959089  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1209 11:51:52.959120  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:52.959057  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:52.959203  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:53.007629  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:54.832641  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.873398185s)
	I1209 11:51:54.832686  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1209 11:51:54.832694  662109 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.825022672s)
	I1209 11:51:54.832714  662109 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:54.832748  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1209 11:51:54.832769  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:54.832853  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:51:52.953290  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:52.953711  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:52.953743  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:52.953658  663492 retry.go:31] will retry after 2.009829783s: waiting for machine to come up
	I1209 11:51:54.965818  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:54.966337  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:54.966372  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:54.966285  663492 retry.go:31] will retry after 2.209879817s: waiting for machine to come up
	I1209 11:51:57.177397  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:57.177870  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:57.177901  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:57.177805  663492 retry.go:31] will retry after 2.999056002s: waiting for machine to come up
	I1209 11:51:58.433813  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.600992195s)
	I1209 11:51:58.433889  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1209 11:51:58.433913  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:58.433831  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.600948593s)
	I1209 11:51:58.433947  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1209 11:51:58.433961  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:59.792012  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.35801884s)
	I1209 11:51:59.792049  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1209 11:51:59.792078  662109 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:51:59.792127  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:52:00.635140  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1209 11:52:00.635193  662109 cache_images.go:123] Successfully loaded all cached images
	I1209 11:52:00.635212  662109 cache_images.go:92] duration metric: took 15.021464053s to LoadCachedImages
	I1209 11:52:00.635232  662109 kubeadm.go:934] updating node { 192.168.39.169 8443 v1.31.2 crio true true} ...
	I1209 11:52:00.635395  662109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-820741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:00.635481  662109 ssh_runner.go:195] Run: crio config
	I1209 11:52:00.680321  662109 cni.go:84] Creating CNI manager for ""
	I1209 11:52:00.680345  662109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:00.680370  662109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:00.680394  662109 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.169 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-820741 NodeName:no-preload-820741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:00.680545  662109 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-820741"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.169"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.169"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:00.680614  662109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:00.690391  662109 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:00.690484  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:00.699034  662109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1209 11:52:00.714710  662109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:00.730375  662109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1209 11:52:00.747519  662109 ssh_runner.go:195] Run: grep 192.168.39.169	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:00.751163  662109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:00.762405  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:00.881308  662109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:00.898028  662109 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741 for IP: 192.168.39.169
	I1209 11:52:00.898060  662109 certs.go:194] generating shared ca certs ...
	I1209 11:52:00.898085  662109 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:00.898349  662109 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:00.898415  662109 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:00.898429  662109 certs.go:256] generating profile certs ...
	I1209 11:52:00.898565  662109 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.key
	I1209 11:52:00.898646  662109 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.key.814e22a1
	I1209 11:52:00.898701  662109 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.key
	I1209 11:52:00.898859  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:00.898904  662109 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:00.898918  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:00.898949  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:00.898982  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:00.899007  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:00.899045  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:00.899994  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:00.943848  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:00.970587  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:01.025164  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:01.055766  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 11:52:01.089756  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:52:01.112171  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:01.135928  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 11:52:01.157703  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:01.179806  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:01.201663  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:01.223314  662109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:01.239214  662109 ssh_runner.go:195] Run: openssl version
	I1209 11:52:01.244687  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:01.254630  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.258801  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.258849  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.264219  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:01.274077  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:01.284511  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.289141  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.289216  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.295079  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:01.305606  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:01.315795  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.320085  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.320147  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.325590  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:01.335747  662109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:01.340113  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:01.346217  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:01.351799  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:01.357441  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:01.362784  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:01.368210  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
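
The openssl x509 -checkend 86400 calls above confirm that each existing control-plane certificate remains valid for at least another 24 hours before it is reused. The same check can be expressed natively in Go with crypto/x509; the certificate path below is just an example.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires within d,
// the same condition that "openssl x509 -checkend" tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
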
	I1209 11:52:01.373975  662109 kubeadm.go:392] StartCluster: {Name:no-preload-820741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:01.374101  662109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:01.374160  662109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:01.409780  662109 cri.go:89] found id: ""
	I1209 11:52:01.409852  662109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:01.419505  662109 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:01.419550  662109 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:01.419603  662109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:01.429000  662109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:01.429999  662109 kubeconfig.go:125] found "no-preload-820741" server: "https://192.168.39.169:8443"
	I1209 11:52:01.432151  662109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:01.440964  662109 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.169
	I1209 11:52:01.441003  662109 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:01.441021  662109 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:01.441084  662109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:01.474788  662109 cri.go:89] found id: ""
	I1209 11:52:01.474865  662109 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:01.491360  662109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:01.500483  662109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:01.500505  662109 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:01.500558  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:01.509190  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:01.509251  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:01.518248  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:01.526845  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:01.526909  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:01.535849  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:01.544609  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:01.544672  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:01.553527  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:01.561876  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:01.561928  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
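
During restartPrimaryControlPlane, each leftover kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is absent (or, as here, when the file does not exist at all), so that the subsequent kubeadm init phases regenerate them. A native Go approximation of that check-and-remove step is sketched below; treating an unreadable file as stale matches the behaviour shown in the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes a kubeconfig that does not reference the expected
// control-plane endpoint. A file that cannot be read is treated as stale,
// mirroring the "No such file or directory" cases above.
func removeIfStale(path, endpoint string) {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return // endpoint present, keep the file
	}
	fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
	_ = os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		removeIfStale(f, endpoint)
	}
}
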
	I1209 11:52:00.178781  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:00.179225  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:52:00.179273  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:52:00.179165  663492 retry.go:31] will retry after 4.532370187s: waiting for machine to come up
	I1209 11:52:05.915073  663024 start.go:364] duration metric: took 2m6.318720193s to acquireMachinesLock for "default-k8s-diff-port-482476"
	I1209 11:52:05.915166  663024 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:52:05.915179  663024 fix.go:54] fixHost starting: 
	I1209 11:52:05.915652  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:05.915716  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:05.933810  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
	I1209 11:52:05.934363  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:05.935019  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:52:05.935071  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:05.935489  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:05.935682  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:05.935879  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:52:05.937627  663024 fix.go:112] recreateIfNeeded on default-k8s-diff-port-482476: state=Stopped err=<nil>
	I1209 11:52:05.937660  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	W1209 11:52:05.937842  663024 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:52:05.939893  663024 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-482476" ...
	I1209 11:52:01.570657  662109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:01.579782  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:01.680268  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.573653  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.762024  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.826444  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.932170  662109 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:02.932291  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.432933  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.933186  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.948529  662109 api_server.go:72] duration metric: took 1.016357501s to wait for apiserver process to appear ...
	I1209 11:52:03.948565  662109 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:03.948595  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.443635  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.443675  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:06.443692  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.490801  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.490839  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:06.490860  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.502460  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.502497  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:04.713201  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.713711  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has current primary IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.713817  662586 main.go:141] libmachine: (old-k8s-version-014592) Found IP for machine: 192.168.61.132
	I1209 11:52:04.713853  662586 main.go:141] libmachine: (old-k8s-version-014592) Reserving static IP address...
	I1209 11:52:04.714267  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "old-k8s-version-014592", mac: "52:54:00:54:72:3e", ip: "192.168.61.132"} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.714298  662586 main.go:141] libmachine: (old-k8s-version-014592) Reserved static IP address: 192.168.61.132
	I1209 11:52:04.714318  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | skip adding static IP to network mk-old-k8s-version-014592 - found existing host DHCP lease matching {name: "old-k8s-version-014592", mac: "52:54:00:54:72:3e", ip: "192.168.61.132"}
	I1209 11:52:04.714332  662586 main.go:141] libmachine: (old-k8s-version-014592) Waiting for SSH to be available...
	I1209 11:52:04.714347  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Getting to WaitForSSH function...
	I1209 11:52:04.716632  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.716972  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.717005  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.717129  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Using SSH client type: external
	I1209 11:52:04.717157  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa (-rw-------)
	I1209 11:52:04.717192  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:04.717206  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | About to run SSH command:
	I1209 11:52:04.717223  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | exit 0
	I1209 11:52:04.846290  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:04.846675  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetConfigRaw
	I1209 11:52:04.847483  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:04.850430  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.850859  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.850888  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.851113  662586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/config.json ...
	I1209 11:52:04.851328  662586 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:04.851348  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:04.851547  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:04.854318  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.854622  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.854654  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.854782  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:04.854959  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.855134  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.855276  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:04.855438  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:04.855696  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:04.855709  662586 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:04.963021  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:04.963059  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:04.963344  662586 buildroot.go:166] provisioning hostname "old-k8s-version-014592"
	I1209 11:52:04.963368  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:04.963545  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:04.966102  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.966461  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.966496  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.966607  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:04.966780  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.966919  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.967056  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:04.967221  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:04.967407  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:04.967419  662586 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-014592 && echo "old-k8s-version-014592" | sudo tee /etc/hostname
	I1209 11:52:05.094147  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-014592
	
	I1209 11:52:05.094210  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.097298  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.097729  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.097765  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.097949  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.098197  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.098460  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.098632  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.098829  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.099046  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.099082  662586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-014592' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-014592/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-014592' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:05.210739  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:05.210785  662586 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:05.210846  662586 buildroot.go:174] setting up certificates
	I1209 11:52:05.210859  662586 provision.go:84] configureAuth start
	I1209 11:52:05.210881  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:05.211210  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:05.214546  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.214937  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.214967  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.215167  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.217866  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.218269  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.218300  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.218452  662586 provision.go:143] copyHostCerts
	I1209 11:52:05.218530  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:05.218558  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:05.218630  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:05.218807  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:05.218820  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:05.218863  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:05.218943  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:05.218953  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:05.218983  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:05.219060  662586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-014592 san=[127.0.0.1 192.168.61.132 localhost minikube old-k8s-version-014592]
	I1209 11:52:05.292744  662586 provision.go:177] copyRemoteCerts
	I1209 11:52:05.292830  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:05.292867  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.296244  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.296670  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.296712  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.296896  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.297111  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.297330  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.297514  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.381148  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:05.404883  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 11:52:05.433421  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:05.456775  662586 provision.go:87] duration metric: took 245.894878ms to configureAuth
	I1209 11:52:05.456811  662586 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:05.457003  662586 config.go:182] Loaded profile config "old-k8s-version-014592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 11:52:05.457082  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.459984  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.460372  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.460415  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.460631  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.460851  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.461021  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.461217  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.461481  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.461702  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.461722  662586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:05.683276  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:05.683311  662586 machine.go:96] duration metric: took 831.968459ms to provisionDockerMachine
	I1209 11:52:05.683335  662586 start.go:293] postStartSetup for "old-k8s-version-014592" (driver="kvm2")
	I1209 11:52:05.683349  662586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:05.683391  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.683809  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:05.683850  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.687116  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.687540  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.687579  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.687787  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.688013  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.688204  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.688439  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.768777  662586 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:05.772572  662586 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:05.772603  662586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:05.772690  662586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:05.772813  662586 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:05.772942  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:05.784153  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:05.808677  662586 start.go:296] duration metric: took 125.320445ms for postStartSetup
	I1209 11:52:05.808736  662586 fix.go:56] duration metric: took 21.705557963s for fixHost
	I1209 11:52:05.808766  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.811685  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.812053  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.812090  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.812426  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.812639  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.812853  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.812996  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.813345  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.813562  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.813572  662586 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:05.914863  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745125.875320243
	
	I1209 11:52:05.914892  662586 fix.go:216] guest clock: 1733745125.875320243
	I1209 11:52:05.914906  662586 fix.go:229] Guest: 2024-12-09 11:52:05.875320243 +0000 UTC Remote: 2024-12-09 11:52:05.808742373 +0000 UTC m=+218.159686894 (delta=66.57787ms)
	I1209 11:52:05.914941  662586 fix.go:200] guest clock delta is within tolerance: 66.57787ms
	I1209 11:52:05.914952  662586 start.go:83] releasing machines lock for "old-k8s-version-014592", held for 21.811813657s
	I1209 11:52:05.914983  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.915289  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:05.918015  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.918513  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.918546  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.918662  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919315  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919508  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919628  662586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:05.919684  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.919739  662586 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:05.919767  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.922529  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.922816  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923096  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.923121  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923223  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.923258  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923291  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.923459  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.923602  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.923616  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.923848  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.923900  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.924030  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.924104  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:06.037215  662586 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:06.043193  662586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:06.193717  662586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:06.199693  662586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:06.199786  662586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:06.216007  662586 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:06.216040  662586 start.go:495] detecting cgroup driver to use...
	I1209 11:52:06.216131  662586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:06.233631  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:06.249730  662586 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:06.249817  662586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:06.265290  662586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:06.281676  662586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:06.432116  662586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:06.605899  662586 docker.go:233] disabling docker service ...
	I1209 11:52:06.606004  662586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:06.622861  662586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:06.637605  662586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:06.772842  662586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:06.905950  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:06.923048  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:06.943483  662586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 11:52:06.943542  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.957647  662586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:06.957725  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.970221  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.981243  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.992084  662586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:07.004284  662586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:07.014329  662586 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:07.014411  662586 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:07.028104  662586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:52:07.038782  662586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:07.155779  662586 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:07.271726  662586 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:07.271815  662586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:07.276994  662586 start.go:563] Will wait 60s for crictl version
	I1209 11:52:07.277061  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:07.281212  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:07.328839  662586 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:07.328959  662586 ssh_runner.go:195] Run: crio --version
	I1209 11:52:07.360632  662586 ssh_runner.go:195] Run: crio --version
	I1209 11:52:07.393046  662586 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1209 11:52:07.394357  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:07.398002  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:07.398539  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:07.398564  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:07.398893  662586 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:07.404512  662586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:07.417822  662586 kubeadm.go:883] updating cluster {Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:07.418006  662586 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 11:52:07.418108  662586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:07.473163  662586 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:52:07.473249  662586 ssh_runner.go:195] Run: which lz4
	I1209 11:52:07.478501  662586 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:07.483744  662586 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:07.483786  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1209 11:52:06.949438  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.959097  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:06.959150  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:07.449249  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:07.466817  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:07.466860  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:07.948998  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:07.958340  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I1209 11:52:07.966049  662109 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:07.966095  662109 api_server.go:131] duration metric: took 4.017521352s to wait for apiserver health ...
	I1209 11:52:07.966111  662109 cni.go:84] Creating CNI manager for ""
	I1209 11:52:07.966121  662109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:07.967962  662109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:52:05.941206  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Start
	I1209 11:52:05.941411  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring networks are active...
	I1209 11:52:05.942245  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring network default is active
	I1209 11:52:05.942724  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring network mk-default-k8s-diff-port-482476 is active
	I1209 11:52:05.943274  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Getting domain xml...
	I1209 11:52:05.944080  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Creating domain...
	I1209 11:52:07.394633  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting to get IP...
	I1209 11:52:07.396032  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.397444  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.397560  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.397434  663663 retry.go:31] will retry after 205.256699ms: waiting for machine to come up
	I1209 11:52:07.604209  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.604884  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.604920  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.604828  663663 retry.go:31] will retry after 291.255961ms: waiting for machine to come up
	I1209 11:52:07.897467  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.898992  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.899020  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.898866  663663 retry.go:31] will retry after 437.180412ms: waiting for machine to come up
	I1209 11:52:08.337664  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.338195  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.338235  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:08.338151  663663 retry.go:31] will retry after 603.826089ms: waiting for machine to come up
	I1209 11:52:08.944048  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.944672  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.944702  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:08.944612  663663 retry.go:31] will retry after 557.882868ms: waiting for machine to come up
	I1209 11:52:07.969367  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:07.986045  662109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:52:08.075377  662109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:08.091609  662109 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:08.091648  662109 system_pods.go:61] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:08.091656  662109 system_pods.go:61] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:08.091664  662109 system_pods.go:61] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:08.091670  662109 system_pods.go:61] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:08.091675  662109 system_pods.go:61] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:52:08.091681  662109 system_pods.go:61] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:08.091686  662109 system_pods.go:61] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:08.091691  662109 system_pods.go:61] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:08.091699  662109 system_pods.go:74] duration metric: took 16.289433ms to wait for pod list to return data ...
	I1209 11:52:08.091707  662109 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:08.096961  662109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:08.097010  662109 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:08.097047  662109 node_conditions.go:105] duration metric: took 5.334194ms to run NodePressure ...
	I1209 11:52:08.097073  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:08.573868  662109 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:08.583670  662109 kubeadm.go:739] kubelet initialised
	I1209 11:52:08.583700  662109 kubeadm.go:740] duration metric: took 9.800796ms waiting for restarted kubelet to initialise ...
	I1209 11:52:08.583713  662109 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:08.592490  662109 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.600581  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.600611  662109 pod_ready.go:82] duration metric: took 8.087599ms for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.600623  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.600633  662109 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.609663  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "etcd-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.609698  662109 pod_ready.go:82] duration metric: took 9.054194ms for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.609712  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "etcd-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.609722  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.615482  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-apiserver-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.615514  662109 pod_ready.go:82] duration metric: took 5.78152ms for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.615526  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-apiserver-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.615536  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.623662  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.623698  662109 pod_ready.go:82] duration metric: took 8.151877ms for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.623713  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.623722  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.978286  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-proxy-hpvvp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.978323  662109 pod_ready.go:82] duration metric: took 354.589596ms for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.978344  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-proxy-hpvvp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.978356  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:09.378434  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-scheduler-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.378471  662109 pod_ready.go:82] duration metric: took 400.107028ms for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:09.378484  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-scheduler-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.378494  662109 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:09.778087  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.778117  662109 pod_ready.go:82] duration metric: took 399.613592ms for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:09.778129  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.778138  662109 pod_ready.go:39] duration metric: took 1.194413796s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:09.778162  662109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:52:09.793629  662109 ops.go:34] apiserver oom_adj: -16
	I1209 11:52:09.793663  662109 kubeadm.go:597] duration metric: took 8.374104555s to restartPrimaryControlPlane
	I1209 11:52:09.793681  662109 kubeadm.go:394] duration metric: took 8.419719684s to StartCluster
	I1209 11:52:09.793708  662109 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:09.793848  662109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:52:09.796407  662109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:09.796774  662109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:52:09.796837  662109 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:52:09.796954  662109 addons.go:69] Setting storage-provisioner=true in profile "no-preload-820741"
	I1209 11:52:09.796975  662109 addons.go:234] Setting addon storage-provisioner=true in "no-preload-820741"
	W1209 11:52:09.796984  662109 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:52:09.797023  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.797048  662109 config.go:182] Loaded profile config "no-preload-820741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:09.797086  662109 addons.go:69] Setting default-storageclass=true in profile "no-preload-820741"
	I1209 11:52:09.797110  662109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-820741"
	I1209 11:52:09.797119  662109 addons.go:69] Setting metrics-server=true in profile "no-preload-820741"
	I1209 11:52:09.797150  662109 addons.go:234] Setting addon metrics-server=true in "no-preload-820741"
	W1209 11:52:09.797160  662109 addons.go:243] addon metrics-server should already be in state true
	I1209 11:52:09.797204  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.797545  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797571  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797579  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797596  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.797611  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.797620  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.799690  662109 out.go:177] * Verifying Kubernetes components...
	I1209 11:52:09.801035  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:09.814968  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33991
	I1209 11:52:09.815010  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34319
	I1209 11:52:09.815576  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.815715  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.816340  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.816361  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.816666  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.816683  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.816745  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.817402  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.817449  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.818118  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.818680  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.818718  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.842345  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37501
	I1209 11:52:09.842582  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44183
	I1209 11:52:09.842703  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38793
	I1209 11:52:09.843479  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843608  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843667  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843973  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.843999  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.844168  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.844180  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.844575  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.844773  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.845107  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.845122  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.845633  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.845887  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.847386  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.848553  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.849410  662109 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:52:09.849690  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.850230  662109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:09.850303  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:52:09.850323  662109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:52:09.850346  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.851051  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.851404  662109 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:52:09.851426  662109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:52:09.851447  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.855303  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.855935  662109 addons.go:234] Setting addon default-storageclass=true in "no-preload-820741"
	W1209 11:52:09.855958  662109 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:52:09.855991  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.856373  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.856429  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.857583  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.857614  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.857874  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.858206  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.858588  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.858766  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:09.859464  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.859875  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.859897  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.860238  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.860449  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.860597  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.860736  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:09.880235  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I1209 11:52:09.880846  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.881409  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.881429  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.881855  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.882651  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.882711  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.904576  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I1209 11:52:09.905132  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.905765  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.905788  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.906224  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.906469  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.908475  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.908715  662109 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:52:09.908735  662109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:52:09.908756  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.912294  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.912928  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.912963  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.913128  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.913383  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.913563  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.913711  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:10.141200  662109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:10.172182  662109 node_ready.go:35] waiting up to 6m0s for node "no-preload-820741" to be "Ready" ...
	I1209 11:52:10.306617  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:52:10.306646  662109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:52:10.321962  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:52:10.326125  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:52:10.360534  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:52:10.360568  662109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:52:10.470875  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:52:10.470917  662109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:52:10.555610  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:52:11.721480  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.395310752s)
	I1209 11:52:11.721571  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721638  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.721581  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.165925756s)
	I1209 11:52:11.721735  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.399738143s)
	I1209 11:52:11.721753  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721766  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.721765  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721779  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722002  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722014  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722021  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722028  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722201  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722213  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722221  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722226  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722320  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.722329  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722349  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.722360  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722384  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722395  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722424  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722438  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722465  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722475  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722490  662109 addons.go:475] Verifying addon metrics-server=true in "no-preload-820741"
	I1209 11:52:11.722560  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722579  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722564  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.729638  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.729660  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.729934  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.729950  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.731642  662109 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1209 11:52:09.097654  662586 crio.go:462] duration metric: took 1.619191765s to copy over tarball
	I1209 11:52:09.097748  662586 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:12.304496  662586 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.20670295s)
	I1209 11:52:12.304543  662586 crio.go:469] duration metric: took 3.206852542s to extract the tarball
	I1209 11:52:12.304553  662586 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:12.347991  662586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:12.385411  662586 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:52:12.385438  662586 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 11:52:12.385533  662586 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:12.385557  662586 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.385570  662586 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.385609  662586 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.385641  662586 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 11:52:12.385650  662586 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.385645  662586 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.385620  662586 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.387326  662586 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.387335  662586 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:12.387371  662586 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 11:52:12.387372  662586 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.387328  662586 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.387338  662586 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.387328  662586 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.387383  662586 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.621631  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.623694  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.632536  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 11:52:12.634550  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.638401  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.641071  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.645344  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:09.504566  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:09.505124  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:09.505155  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:09.505076  663663 retry.go:31] will retry after 636.87343ms: waiting for machine to come up
	I1209 11:52:10.144387  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.145090  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.145119  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:10.145037  663663 retry.go:31] will retry after 716.448577ms: waiting for machine to come up
	I1209 11:52:10.863113  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.863816  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.863848  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:10.863762  663663 retry.go:31] will retry after 901.007245ms: waiting for machine to come up
	I1209 11:52:11.766356  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:11.766745  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:11.766773  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:11.766688  663663 retry.go:31] will retry after 1.570604193s: waiting for machine to come up
	I1209 11:52:13.339318  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:13.339796  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:13.339828  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:13.339744  663663 retry.go:31] will retry after 1.928200683s: waiting for machine to come up
	I1209 11:52:11.732956  662109 addons.go:510] duration metric: took 1.936137102s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1209 11:52:12.175844  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:14.504491  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:12.756066  662586 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 11:52:12.756121  662586 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.756134  662586 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 11:52:12.756175  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.756179  662586 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.756230  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.808091  662586 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 11:52:12.808139  662586 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 11:52:12.808186  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809593  662586 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 11:52:12.809622  662586 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 11:52:12.809637  662586 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.809659  662586 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.809682  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809712  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809775  662586 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 11:52:12.809803  662586 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.809829  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.809841  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809724  662586 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 11:52:12.809873  662586 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.809898  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809933  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.812256  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:12.819121  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.825106  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.910431  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.910501  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.910560  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.910503  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.910638  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:12.910713  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.930461  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:13.079147  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:13.079189  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:13.079233  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:13.079276  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:13.079418  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:13.079447  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:13.079517  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:13.224753  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 11:52:13.227126  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 11:52:13.227190  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:13.227253  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 11:52:13.227291  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:13.227332  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 11:52:13.227393  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 11:52:13.277747  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 11:52:13.285286  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 11:52:13.663858  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:13.805603  662586 cache_images.go:92] duration metric: took 1.420145666s to LoadCachedImages
	W1209 11:52:13.805814  662586 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1209 11:52:13.805848  662586 kubeadm.go:934] updating node { 192.168.61.132 8443 v1.20.0 crio true true} ...
	I1209 11:52:13.805980  662586 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-014592 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:13.806079  662586 ssh_runner.go:195] Run: crio config
	I1209 11:52:13.870766  662586 cni.go:84] Creating CNI manager for ""
	I1209 11:52:13.870797  662586 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:13.870813  662586 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:13.870841  662586 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.132 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-014592 NodeName:old-k8s-version-014592 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 11:52:13.871050  662586 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-014592"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:13.871136  662586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 11:52:13.881556  662586 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:13.881628  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:13.891122  662586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1209 11:52:13.908181  662586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:13.925041  662586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1209 11:52:13.941567  662586 ssh_runner.go:195] Run: grep 192.168.61.132	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:13.945502  662586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:13.957476  662586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:14.091699  662586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:14.108772  662586 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592 for IP: 192.168.61.132
	I1209 11:52:14.108810  662586 certs.go:194] generating shared ca certs ...
	I1209 11:52:14.108838  662586 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:14.109024  662586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:14.109087  662586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:14.109105  662586 certs.go:256] generating profile certs ...
	I1209 11:52:14.109248  662586 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.key
	I1209 11:52:14.109323  662586 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key.28078577
	I1209 11:52:14.109383  662586 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key
	I1209 11:52:14.109572  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:14.109609  662586 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:14.109619  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:14.109659  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:14.109697  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:14.109737  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:14.109802  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:14.110497  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:14.145815  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:14.179452  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:14.217469  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:14.250288  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 11:52:14.287110  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:52:14.317190  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:14.356825  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:14.379756  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:14.402045  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:14.425287  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:14.448025  662586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:14.464144  662586 ssh_runner.go:195] Run: openssl version
	I1209 11:52:14.470256  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:14.481298  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.485849  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.485904  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.492321  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:14.504155  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:14.515819  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.520876  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.520955  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.527295  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:14.538319  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:14.549753  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.554273  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.554341  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.559893  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:14.570744  662586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:14.575763  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:14.582279  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:14.588549  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:14.594376  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:14.599758  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:14.605497  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 11:52:14.611083  662586 kubeadm.go:392] StartCluster: {Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:14.611213  662586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:14.611288  662586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:14.649447  662586 cri.go:89] found id: ""
	I1209 11:52:14.649538  662586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:14.660070  662586 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:14.660094  662586 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:14.660145  662586 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:14.670412  662586 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:14.671387  662586 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-014592" does not appear in /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:52:14.672043  662586 kubeconfig.go:62] /home/jenkins/minikube-integration/20068-609844/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-014592" cluster setting kubeconfig missing "old-k8s-version-014592" context setting]
	I1209 11:52:14.673337  662586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:14.708285  662586 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:14.719486  662586 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.132
	I1209 11:52:14.719535  662586 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:14.719563  662586 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:14.719635  662586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:14.755280  662586 cri.go:89] found id: ""
	I1209 11:52:14.755369  662586 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:14.771385  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:14.781364  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:14.781387  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:14.781455  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:14.790942  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:14.791016  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:14.800481  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:14.809875  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:14.809948  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:14.819619  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:14.831670  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:14.831750  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:14.844244  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:14.853328  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:14.853403  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:14.862428  662586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:14.871346  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.007799  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.697594  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.921787  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:16.031826  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:16.132199  662586 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:16.132310  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:16.633329  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:17.133389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:17.632581  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:15.270255  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:15.270804  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:15.270836  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:15.270741  663663 retry.go:31] will retry after 2.90998032s: waiting for machine to come up
	I1209 11:52:18.182069  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:18.182741  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:18.182774  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:18.182689  663663 retry.go:31] will retry after 3.196470388s: waiting for machine to come up
	I1209 11:52:16.676188  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:17.175894  662109 node_ready.go:49] node "no-preload-820741" has status "Ready":"True"
	I1209 11:52:17.175928  662109 node_ready.go:38] duration metric: took 7.003696159s for node "no-preload-820741" to be "Ready" ...
	I1209 11:52:17.175945  662109 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:17.180647  662109 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:19.188583  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:18.133165  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:18.632403  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:19.132416  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:19.633332  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:20.133343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:20.632968  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:21.133411  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:21.632656  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:22.132876  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:22.632816  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:21.381260  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:21.381912  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:21.381943  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:21.381834  663663 retry.go:31] will retry after 3.621023528s: waiting for machine to come up
	I1209 11:52:26.142813  661546 start.go:364] duration metric: took 56.424295065s to acquireMachinesLock for "embed-certs-005123"
	I1209 11:52:26.142877  661546 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:52:26.142886  661546 fix.go:54] fixHost starting: 
	I1209 11:52:26.143376  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:26.143416  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:26.164438  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45591
	I1209 11:52:26.165041  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:26.165779  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:52:26.165828  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:26.166318  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:26.166544  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:26.166745  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:52:26.168534  661546 fix.go:112] recreateIfNeeded on embed-certs-005123: state=Stopped err=<nil>
	I1209 11:52:26.168564  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	W1209 11:52:26.168753  661546 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:52:26.170973  661546 out.go:177] * Restarting existing kvm2 VM for "embed-certs-005123" ...
	I1209 11:52:26.172269  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Start
	I1209 11:52:26.172500  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring networks are active...
	I1209 11:52:26.173391  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring network default is active
	I1209 11:52:26.173747  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring network mk-embed-certs-005123 is active
	I1209 11:52:26.174208  661546 main.go:141] libmachine: (embed-certs-005123) Getting domain xml...
	I1209 11:52:26.174990  661546 main.go:141] libmachine: (embed-certs-005123) Creating domain...
	I1209 11:52:21.687274  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:23.688011  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:24.187886  662109 pod_ready.go:93] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.187917  662109 pod_ready.go:82] duration metric: took 7.007243363s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.187928  662109 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.193936  662109 pod_ready.go:93] pod "etcd-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.193958  662109 pod_ready.go:82] duration metric: took 6.02353ms for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.193966  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.203685  662109 pod_ready.go:93] pod "kube-apiserver-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.203712  662109 pod_ready.go:82] duration metric: took 9.739287ms for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.203722  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.210004  662109 pod_ready.go:93] pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.210034  662109 pod_ready.go:82] duration metric: took 6.304008ms for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.210048  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.216225  662109 pod_ready.go:93] pod "kube-proxy-hpvvp" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.216249  662109 pod_ready.go:82] duration metric: took 6.193945ms for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.216258  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.584682  662109 pod_ready.go:93] pod "kube-scheduler-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.584711  662109 pod_ready.go:82] duration metric: took 368.445803ms for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.584724  662109 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:25.004323  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.004761  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Found IP for machine: 192.168.50.25
	I1209 11:52:25.004791  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has current primary IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.004798  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Reserving static IP address...
	I1209 11:52:25.005275  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-482476", mac: "52:54:00:f0:c9:8a", ip: "192.168.50.25"} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.005301  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | skip adding static IP to network mk-default-k8s-diff-port-482476 - found existing host DHCP lease matching {name: "default-k8s-diff-port-482476", mac: "52:54:00:f0:c9:8a", ip: "192.168.50.25"}
	I1209 11:52:25.005314  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Reserved static IP address: 192.168.50.25
	I1209 11:52:25.005328  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for SSH to be available...
	I1209 11:52:25.005342  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Getting to WaitForSSH function...
	I1209 11:52:25.007758  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.008146  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.008189  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.008291  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Using SSH client type: external
	I1209 11:52:25.008318  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa (-rw-------)
	I1209 11:52:25.008348  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:25.008361  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | About to run SSH command:
	I1209 11:52:25.008369  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | exit 0
	I1209 11:52:25.130532  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:25.130901  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetConfigRaw
	I1209 11:52:25.131568  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:25.134487  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.134816  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.134854  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.135163  663024 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/config.json ...
	I1209 11:52:25.135451  663024 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:25.135480  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:25.135736  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.138444  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.138853  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.138894  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.138981  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.139188  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.139327  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.139491  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.139655  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.139895  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.139906  663024 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:25.242441  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:25.242472  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.242837  663024 buildroot.go:166] provisioning hostname "default-k8s-diff-port-482476"
	I1209 11:52:25.242878  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.243093  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.245995  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.246447  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.246478  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.246685  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.246900  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.247052  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.247175  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.247330  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.247518  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.247531  663024 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-482476 && echo "default-k8s-diff-port-482476" | sudo tee /etc/hostname
	I1209 11:52:25.361366  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-482476
	
	I1209 11:52:25.361397  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.364194  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.364608  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.364639  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.364813  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.365064  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.365267  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.365369  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.365613  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.365790  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.365808  663024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-482476' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-482476/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-482476' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:25.475311  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:25.475346  663024 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:25.475386  663024 buildroot.go:174] setting up certificates
	I1209 11:52:25.475403  663024 provision.go:84] configureAuth start
	I1209 11:52:25.475412  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.475711  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:25.478574  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.478903  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.478935  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.479055  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.481280  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.481655  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.481688  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.481788  663024 provision.go:143] copyHostCerts
	I1209 11:52:25.481845  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:25.481876  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:25.481957  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:25.482056  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:25.482065  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:25.482090  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:25.482243  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:25.482254  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:25.482279  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:25.482336  663024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-482476 san=[127.0.0.1 192.168.50.25 default-k8s-diff-port-482476 localhost minikube]
	I1209 11:52:25.534856  663024 provision.go:177] copyRemoteCerts
	I1209 11:52:25.534921  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:25.534951  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.537732  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.538138  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.538190  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.538390  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.538611  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.538783  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.538943  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:25.619772  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:25.643527  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1209 11:52:25.668517  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:25.693573  663024 provision.go:87] duration metric: took 218.153182ms to configureAuth
	I1209 11:52:25.693615  663024 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:25.693807  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:25.693906  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.696683  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.697058  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.697092  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.697344  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.697548  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.697741  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.697868  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.698033  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.698229  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.698254  663024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:25.915568  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:25.915595  663024 machine.go:96] duration metric: took 780.126343ms to provisionDockerMachine
	I1209 11:52:25.915610  663024 start.go:293] postStartSetup for "default-k8s-diff-port-482476" (driver="kvm2")
	I1209 11:52:25.915620  663024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:25.915644  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:25.916005  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:25.916047  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.919268  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.919602  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.919628  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.919775  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.919967  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.920133  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.920285  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.000530  663024 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:26.004544  663024 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:26.004574  663024 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:26.004651  663024 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:26.004759  663024 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:26.004885  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:26.013444  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:26.036052  663024 start.go:296] duration metric: took 120.422739ms for postStartSetup
	I1209 11:52:26.036110  663024 fix.go:56] duration metric: took 20.120932786s for fixHost
	I1209 11:52:26.036135  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.039079  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.039445  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.039478  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.039797  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.040065  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.040228  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.040427  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.040620  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:26.040906  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:26.040924  663024 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:26.142590  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745146.090497627
	
	I1209 11:52:26.142623  663024 fix.go:216] guest clock: 1733745146.090497627
	I1209 11:52:26.142634  663024 fix.go:229] Guest: 2024-12-09 11:52:26.090497627 +0000 UTC Remote: 2024-12-09 11:52:26.036115182 +0000 UTC m=+146.587055001 (delta=54.382445ms)
	I1209 11:52:26.142669  663024 fix.go:200] guest clock delta is within tolerance: 54.382445ms
	I1209 11:52:26.142681  663024 start.go:83] releasing machines lock for "default-k8s-diff-port-482476", held for 20.227543026s
	I1209 11:52:26.142723  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.143032  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:26.146118  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.146602  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.146634  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.146841  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147440  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147709  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147833  663024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:26.147872  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.147980  663024 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:26.148009  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.151002  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151346  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151379  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.151410  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151534  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.151729  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.151848  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.151876  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151904  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.152003  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.152082  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.152159  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.152322  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.152565  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.231575  663024 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:26.267939  663024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:26.418953  663024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:26.426243  663024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:26.426337  663024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:26.448407  663024 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:26.448442  663024 start.go:495] detecting cgroup driver to use...
	I1209 11:52:26.448540  663024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:26.469675  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:26.488825  663024 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:26.488902  663024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:26.507716  663024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:26.525232  663024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:26.664062  663024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:26.854813  663024 docker.go:233] disabling docker service ...
	I1209 11:52:26.854883  663024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:26.870021  663024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:26.883610  663024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:27.001237  663024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:27.126865  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:27.144121  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:27.168073  663024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:52:27.168242  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.180516  663024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:27.180587  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.191681  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.204047  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.214157  663024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:27.225934  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.236691  663024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.258774  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.271986  663024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:27.283488  663024 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:27.283539  663024 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:27.299065  663024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:52:27.309203  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:27.431740  663024 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:27.529577  663024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:27.529668  663024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:27.534733  663024 start.go:563] Will wait 60s for crictl version
	I1209 11:52:27.534800  663024 ssh_runner.go:195] Run: which crictl
	I1209 11:52:27.538544  663024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:27.577577  663024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:27.577684  663024 ssh_runner.go:195] Run: crio --version
	I1209 11:52:27.607938  663024 ssh_runner.go:195] Run: crio --version
	I1209 11:52:27.645210  663024 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:52:23.133393  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:23.632776  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:24.133286  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:24.632415  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:25.132348  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:25.632478  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:26.132982  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:26.632517  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.132692  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.633291  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.646510  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:27.650014  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:27.650439  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:27.650469  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:27.650705  663024 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:27.654738  663024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:27.668671  663024 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:27.668808  663024 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:52:27.668873  663024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:27.709582  663024 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:52:27.709679  663024 ssh_runner.go:195] Run: which lz4
	I1209 11:52:27.713702  663024 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:27.717851  663024 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:27.717887  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 11:52:29.037160  663024 crio.go:462] duration metric: took 1.32348676s to copy over tarball
	I1209 11:52:29.037262  663024 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:27.500098  661546 main.go:141] libmachine: (embed-certs-005123) Waiting to get IP...
	I1209 11:52:27.501088  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.501538  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.501605  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.501510  663907 retry.go:31] will retry after 191.187925ms: waiting for machine to come up
	I1209 11:52:27.694017  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.694574  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.694605  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.694512  663907 retry.go:31] will retry after 256.268ms: waiting for machine to come up
	I1209 11:52:27.952185  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.952863  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.952908  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.952759  663907 retry.go:31] will retry after 460.272204ms: waiting for machine to come up
	I1209 11:52:28.414403  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:28.414925  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:28.414967  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:28.414873  663907 retry.go:31] will retry after 450.761189ms: waiting for machine to come up
	I1209 11:52:28.867687  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:28.868350  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:28.868389  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:28.868313  663907 retry.go:31] will retry after 615.800863ms: waiting for machine to come up
	I1209 11:52:29.486566  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:29.487179  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:29.487218  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:29.487108  663907 retry.go:31] will retry after 628.641045ms: waiting for machine to come up
	I1209 11:52:30.117051  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:30.117424  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:30.117459  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:30.117356  663907 retry.go:31] will retry after 902.465226ms: waiting for machine to come up
	I1209 11:52:31.021756  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:31.022268  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:31.022298  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:31.022229  663907 retry.go:31] will retry after 918.939368ms: waiting for machine to come up
	I1209 11:52:26.594953  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:29.093499  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:28.132379  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:28.633377  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:29.132983  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:29.633370  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:30.132748  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:30.633383  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.133450  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.633210  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:32.132406  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:32.632598  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.234956  663024 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.197609203s)
	I1209 11:52:31.235007  663024 crio.go:469] duration metric: took 2.197798334s to extract the tarball
	I1209 11:52:31.235018  663024 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:31.275616  663024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:31.320918  663024 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:52:31.320945  663024 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:52:31.320961  663024 kubeadm.go:934] updating node { 192.168.50.25 8444 v1.31.2 crio true true} ...
	I1209 11:52:31.321122  663024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-482476 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:31.321246  663024 ssh_runner.go:195] Run: crio config
	I1209 11:52:31.367805  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:52:31.367827  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:31.367839  663024 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:31.367863  663024 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.25 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-482476 NodeName:default-k8s-diff-port-482476 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:31.368005  663024 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.25
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-482476"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.25"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.25"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:31.368074  663024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:31.377831  663024 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:31.377902  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:31.386872  663024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1209 11:52:31.403764  663024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:31.419295  663024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
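	Note: the three scp steps above stage the kubelet drop-in, the kubelet.service unit, and the rendered kubeadm config; the config is written as kubeadm.yaml.new and only promoted to kubeadm.yaml later in the restart path. An illustrative way to inspect the staged config by hand (assuming the profile name from this run) is:
	    minikube ssh -p default-k8s-diff-port-482476 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new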
	I1209 11:52:31.435856  663024 ssh_runner.go:195] Run: grep 192.168.50.25	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:31.439480  663024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
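	Note: the bash one-liner above drops any stale control-plane.minikube.internal entry from /etc/hosts and re-adds it pointing at 192.168.50.25, so kubeconfigs that use control-plane.minikube.internal:8444 resolve to this node. A quick check of the result (illustrative, not part of the test) would be:
	    minikube ssh -p default-k8s-diff-port-482476 -- grep control-plane.minikube.internal /etc/hosts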
	I1209 11:52:31.455136  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:31.573295  663024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:31.589679  663024 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476 for IP: 192.168.50.25
	I1209 11:52:31.589703  663024 certs.go:194] generating shared ca certs ...
	I1209 11:52:31.589741  663024 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:31.589930  663024 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:31.589982  663024 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:31.589995  663024 certs.go:256] generating profile certs ...
	I1209 11:52:31.590137  663024 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.key
	I1209 11:52:31.590256  663024 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.key.e2346b12
	I1209 11:52:31.590322  663024 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.key
	I1209 11:52:31.590479  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:31.590522  663024 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:31.590535  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:31.590571  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:31.590612  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:31.590649  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:31.590710  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:31.591643  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:31.634363  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:31.660090  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:31.692933  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:31.726010  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 11:52:31.757565  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 11:52:31.781368  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:31.805233  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:31.828391  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:31.850407  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:31.873159  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:31.895503  663024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:31.911754  663024 ssh_runner.go:195] Run: openssl version
	I1209 11:52:31.917771  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:31.929857  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.934518  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.934596  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.940382  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:31.951417  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:31.961966  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.966234  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.966286  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.972070  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:31.982547  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:31.993215  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:31.997579  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:31.997641  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:32.003050  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
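	Note: the openssl x509 -hash calls above compute the subject-name hash OpenSSL uses for CA lookup, and the ln -fs steps create the matching /etc/ssl/certs/<hash>.0 symlinks (3ec20f2e.0, b5213941.0, and 51391683.0 in this run). The same hash can be reproduced manually, for example:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # prints b5213941 here, which is why the symlink is /etc/ssl/certs/b5213941.0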
	I1209 11:52:32.013463  663024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:32.017936  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:32.024029  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:32.029686  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:32.035260  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:32.040696  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:32.046116  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
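	Note: the -checkend 86400 probes above ask openssl whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status means it will not expire in that window, which is what lets minikube reuse the existing certs. Standalone, the same check reads (illustrative):
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "apiserver.crt valid for at least 24h"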
	I1209 11:52:32.051521  663024 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:32.051605  663024 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:32.051676  663024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:32.092529  663024 cri.go:89] found id: ""
	I1209 11:52:32.092623  663024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:32.103153  663024 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:32.103183  663024 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:32.103247  663024 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:32.113029  663024 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:32.114506  663024 kubeconfig.go:125] found "default-k8s-diff-port-482476" server: "https://192.168.50.25:8444"
	I1209 11:52:32.116929  663024 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:32.127055  663024 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.25
	I1209 11:52:32.127108  663024 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:32.127124  663024 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:32.127189  663024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:32.169401  663024 cri.go:89] found id: ""
	I1209 11:52:32.169507  663024 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:32.187274  663024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:32.196843  663024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:32.196867  663024 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:32.196925  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 11:52:32.205670  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:32.205754  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:32.214977  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 11:52:32.223707  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:32.223782  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:32.232514  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 11:52:32.240999  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:32.241076  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:32.250049  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 11:52:32.258782  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:32.258846  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:32.268447  663024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:32.277875  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:32.394016  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.494978  663024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.100920844s)
	I1209 11:52:33.495030  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.719319  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.787272  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.882783  663024 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:33.882876  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.383090  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.942735  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:31.943207  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:31.943244  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:31.943141  663907 retry.go:31] will retry after 1.153139191s: waiting for machine to come up
	I1209 11:52:33.097672  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:33.098233  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:33.098299  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:33.098199  663907 retry.go:31] will retry after 2.002880852s: waiting for machine to come up
	I1209 11:52:35.103239  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:35.103693  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:35.103724  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:35.103639  663907 retry.go:31] will retry after 2.219510124s: waiting for machine to come up
	I1209 11:52:31.593184  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:34.090877  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:36.094569  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:33.132924  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:33.632884  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.132528  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.632989  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.133398  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.632376  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:36.132936  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:36.633152  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:37.133343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:37.633367  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.883172  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.384008  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.883940  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.901453  663024 api_server.go:72] duration metric: took 2.018670363s to wait for apiserver process to appear ...
	I1209 11:52:35.901489  663024 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:35.901524  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.225976  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:38.226017  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:38.226037  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.269459  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:38.269549  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:38.401652  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.407995  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:38.408028  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:38.902416  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.914550  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:38.914579  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:39.401719  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:39.409382  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:39.409427  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:39.902488  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:39.907511  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 200:
	ok
	I1209 11:52:39.914532  663024 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:39.914562  663024 api_server.go:131] duration metric: took 4.013066199s to wait for apiserver health ...
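	Note: the healthz progression above is the expected restart sequence: anonymous probes get 403 until the RBAC bootstrap roles exist, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks finish, and finally 200 once every hook reports ok. A manual verbose probe of the same endpoint (a sketch; -k skips TLS verification and the request is anonymous, so it may see the same 403/500 phases) would be:
	    curl -k "https://192.168.50.25:8444/healthz?verbose"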
	I1209 11:52:39.914586  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:52:39.914594  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:39.915954  663024 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:52:37.324833  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:37.325397  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:37.325430  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:37.325338  663907 retry.go:31] will retry after 3.636796307s: waiting for machine to come up
	I1209 11:52:40.966039  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:40.966438  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:40.966463  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:40.966419  663907 retry.go:31] will retry after 3.704289622s: waiting for machine to come up
	I1209 11:52:38.592804  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:40.593407  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:38.133368  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:38.632475  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.132993  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.633225  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:40.132552  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:40.633292  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:41.132443  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:41.632994  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:42.132631  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:42.633378  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.917397  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:39.928995  663024 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
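	Note: the two steps above create /etc/cni/net.d and write the 1-k8s.conflist bridge CNI config selected by the earlier "recommending bridge" decision. Rather than guessing at the 496-byte payload, the conflist that was actually written can be read back from the node (sketch, profile name from this run):
	    minikube ssh -p default-k8s-diff-port-482476 -- sudo cat /etc/cni/net.d/1-k8s.conflist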
	I1209 11:52:39.953045  663024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:39.962582  663024 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:39.962628  663024 system_pods.go:61] "coredns-7c65d6cfc9-zzrgn" [dca7a835-3b66-4515-b571-6420afc42c44] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:39.962639  663024 system_pods.go:61] "etcd-default-k8s-diff-port-482476" [2323dbbc-9e7f-4047-b0be-b68b851f4986] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:39.962649  663024 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-482476" [0b7a4936-5282-46a4-a08a-e225b303f6f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:39.962658  663024 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-482476" [c6ff79a0-2177-4c79-8021-c523f8d53e9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:39.962666  663024 system_pods.go:61] "kube-proxy-6th5d" [0cff6df1-1adb-4b7e-8d59-a837db026339] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 11:52:39.962682  663024 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-482476" [524125eb-afd4-4e20-b0f0-e58019e84962] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:39.962694  663024 system_pods.go:61] "metrics-server-6867b74b74-bpccn" [7426c800-9ff7-4778-82a0-6c71fd05a222] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:39.962702  663024 system_pods.go:61] "storage-provisioner" [4478313a-58e8-4d24-ab0b-c087e664200d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:39.962711  663024 system_pods.go:74] duration metric: took 9.637672ms to wait for pod list to return data ...
	I1209 11:52:39.962725  663024 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:39.969576  663024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:39.969611  663024 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:39.969627  663024 node_conditions.go:105] duration metric: took 6.893708ms to run NodePressure ...
	I1209 11:52:39.969660  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:40.340239  663024 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:40.345384  663024 kubeadm.go:739] kubelet initialised
	I1209 11:52:40.345412  663024 kubeadm.go:740] duration metric: took 5.145751ms waiting for restarted kubelet to initialise ...
	I1209 11:52:40.345425  663024 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:40.350721  663024 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:42.357138  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:44.361981  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
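	Note: the pod_ready polling above waits up to 4m0s for each system-critical pod to report Ready after the restart; coredns-7c65d6cfc9-zzrgn is still "False" at this point because its container has not yet passed readiness. The equivalent manual check (illustrative; the context name matches the profile) is:
	    kubectl --context default-k8s-diff-port-482476 get pods -n kube-system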
	I1209 11:52:44.674598  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.675048  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has current primary IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.675068  661546 main.go:141] libmachine: (embed-certs-005123) Found IP for machine: 192.168.72.218
	I1209 11:52:44.675075  661546 main.go:141] libmachine: (embed-certs-005123) Reserving static IP address...
	I1209 11:52:44.675492  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "embed-certs-005123", mac: "52:54:00:ee:a0:a8", ip: "192.168.72.218"} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.675522  661546 main.go:141] libmachine: (embed-certs-005123) DBG | skip adding static IP to network mk-embed-certs-005123 - found existing host DHCP lease matching {name: "embed-certs-005123", mac: "52:54:00:ee:a0:a8", ip: "192.168.72.218"}
	I1209 11:52:44.675537  661546 main.go:141] libmachine: (embed-certs-005123) Reserved static IP address: 192.168.72.218
	I1209 11:52:44.675555  661546 main.go:141] libmachine: (embed-certs-005123) Waiting for SSH to be available...
	I1209 11:52:44.675566  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Getting to WaitForSSH function...
	I1209 11:52:44.677490  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.677814  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.677860  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.677952  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Using SSH client type: external
	I1209 11:52:44.678012  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa (-rw-------)
	I1209 11:52:44.678042  661546 main.go:141] libmachine: (embed-certs-005123) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:44.678056  661546 main.go:141] libmachine: (embed-certs-005123) DBG | About to run SSH command:
	I1209 11:52:44.678068  661546 main.go:141] libmachine: (embed-certs-005123) DBG | exit 0
	I1209 11:52:44.798377  661546 main.go:141] libmachine: (embed-certs-005123) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:44.798782  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetConfigRaw
	I1209 11:52:44.799532  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:44.801853  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.802223  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.802255  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.802539  661546 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/config.json ...
	I1209 11:52:44.802777  661546 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:44.802799  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:44.802994  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:44.805481  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.805803  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.805838  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.805999  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:44.806219  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.806386  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.806555  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:44.806716  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:44.806886  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:44.806897  661546 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:44.914443  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:44.914480  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:44.914783  661546 buildroot.go:166] provisioning hostname "embed-certs-005123"
	I1209 11:52:44.914810  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:44.914973  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:44.918053  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.918471  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.918508  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.918701  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:44.918935  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.919087  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.919267  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:44.919452  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:44.919624  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:44.919645  661546 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-005123 && echo "embed-certs-005123" | sudo tee /etc/hostname
	I1209 11:52:45.032725  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-005123
	
	I1209 11:52:45.032760  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.035820  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.036222  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.036263  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.036466  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.036666  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.036864  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.037003  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.037189  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.037396  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.037413  661546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-005123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-005123/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-005123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:45.147189  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:45.147225  661546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:45.147284  661546 buildroot.go:174] setting up certificates
	I1209 11:52:45.147299  661546 provision.go:84] configureAuth start
	I1209 11:52:45.147313  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:45.147667  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:45.150526  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.150965  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.151009  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.151118  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.153778  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.154178  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.154213  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.154382  661546 provision.go:143] copyHostCerts
	I1209 11:52:45.154455  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:45.154478  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:45.154549  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:45.154673  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:45.154685  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:45.154717  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:45.154816  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:45.154829  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:45.154857  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:45.154935  661546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.embed-certs-005123 san=[127.0.0.1 192.168.72.218 embed-certs-005123 localhost minikube]
	I1209 11:52:45.382712  661546 provision.go:177] copyRemoteCerts
	I1209 11:52:45.382772  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:45.382801  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.385625  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.386020  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.386050  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.386241  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.386448  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.386626  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.386765  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:45.464427  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:45.488111  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1209 11:52:45.511231  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:45.534104  661546 provision.go:87] duration metric: took 386.787703ms to configureAuth
	I1209 11:52:45.534141  661546 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:45.534411  661546 config.go:182] Loaded profile config "embed-certs-005123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:45.534526  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.537936  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.538370  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.538402  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.538584  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.538826  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.539019  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.539150  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.539378  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.539551  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.539568  661546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:45.771215  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:45.771259  661546 machine.go:96] duration metric: took 968.466766ms to provisionDockerMachine
	I1209 11:52:45.771276  661546 start.go:293] postStartSetup for "embed-certs-005123" (driver="kvm2")
	I1209 11:52:45.771287  661546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:45.771316  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:45.771673  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:45.771709  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.774881  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.775294  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.775340  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.775510  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.775714  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.775899  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.776065  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:45.856991  661546 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:45.862195  661546 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:45.862224  661546 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:45.862295  661546 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:45.862368  661546 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:45.862497  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:45.874850  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:45.899279  661546 start.go:296] duration metric: took 127.984399ms for postStartSetup
	I1209 11:52:45.899332  661546 fix.go:56] duration metric: took 19.756446591s for fixHost
	I1209 11:52:45.899362  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.902428  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.902828  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.902861  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.903117  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.903344  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.903554  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.903704  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.903955  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.904191  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.904209  661546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:46.007164  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745165.964649155
	
	I1209 11:52:46.007194  661546 fix.go:216] guest clock: 1733745165.964649155
	I1209 11:52:46.007217  661546 fix.go:229] Guest: 2024-12-09 11:52:45.964649155 +0000 UTC Remote: 2024-12-09 11:52:45.899337716 +0000 UTC m=+369.711404421 (delta=65.311439ms)
	I1209 11:52:46.007267  661546 fix.go:200] guest clock delta is within tolerance: 65.311439ms
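(Arithmetic behind that delta, for the record: guest 11:52:45.964649155 minus host-recorded 11:52:45.899337716 is 0.065311439s, i.e. the 65.311439ms reported, which the check accepts as within tolerance.)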
	I1209 11:52:46.007280  661546 start.go:83] releasing machines lock for "embed-certs-005123", held for 19.864428938s
	I1209 11:52:46.007313  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.007616  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:46.011273  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.011799  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.011830  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.012074  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.012681  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.012907  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.013027  661546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:46.013099  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:46.013170  661546 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:46.013196  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:46.016473  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016764  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016840  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.016875  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016964  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:46.017186  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:46.017287  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.017401  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:46.017442  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.017480  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:46.017553  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:46.017785  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:46.017911  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:46.018075  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:46.129248  661546 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:46.136309  661546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:43.091899  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:45.592415  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:46.287879  661546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:46.293689  661546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:46.293770  661546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:46.311972  661546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:46.312009  661546 start.go:495] detecting cgroup driver to use...
	I1209 11:52:46.312085  661546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:46.329406  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:46.344607  661546 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:46.344664  661546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:46.360448  661546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:46.374509  661546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:46.503687  661546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:46.649152  661546 docker.go:233] disabling docker service ...
	I1209 11:52:46.649234  661546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:46.663277  661546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:46.677442  661546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:46.832667  661546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:46.949826  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:46.963119  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:46.981743  661546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:52:46.981834  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:46.991634  661546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:46.991706  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.004032  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.015001  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.025000  661546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:47.035513  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.045431  661546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.061931  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
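Note on the step above: the sed commands rewrite CRI-O's minikube drop-in in place. As a rough, illustrative sketch (the section headers assume CRI-O's stock TOML layout; the key/value pairs are only the ones set by the commands in this run), /etc/crio/crio.conf.d/02-crio.conf should end up containing something like:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]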
	I1209 11:52:47.071531  661546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:47.080492  661546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:47.080559  661546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:47.094021  661546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:52:47.104015  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:47.226538  661546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:47.318832  661546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:47.318911  661546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:47.323209  661546 start.go:563] Will wait 60s for crictl version
	I1209 11:52:47.323276  661546 ssh_runner.go:195] Run: which crictl
	I1209 11:52:47.326773  661546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:47.365536  661546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:47.365629  661546 ssh_runner.go:195] Run: crio --version
	I1209 11:52:47.392781  661546 ssh_runner.go:195] Run: crio --version
	I1209 11:52:47.422945  661546 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:52:43.133189  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:43.632726  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:44.132804  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:44.632952  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:45.132474  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:45.633318  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.133116  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.632595  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:47.133211  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:47.633233  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.858128  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:49.358845  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:47.423936  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:47.426959  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:47.427401  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:47.427425  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:47.427636  661546 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:47.432509  661546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:47.448620  661546 kubeadm.go:883] updating cluster {Name:embed-certs-005123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:47.448772  661546 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:52:47.448824  661546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:47.485100  661546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:52:47.485173  661546 ssh_runner.go:195] Run: which lz4
	I1209 11:52:47.489202  661546 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:47.493060  661546 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:47.493093  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 11:52:48.772297  661546 crio.go:462] duration metric: took 1.283133931s to copy over tarball
	I1209 11:52:48.772381  661546 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:50.959318  661546 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.18690714s)
	I1209 11:52:50.959352  661546 crio.go:469] duration metric: took 2.187018432s to extract the tarball
	I1209 11:52:50.959359  661546 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:50.995746  661546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:51.037764  661546 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:52:51.037792  661546 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:52:51.037799  661546 kubeadm.go:934] updating node { 192.168.72.218 8443 v1.31.2 crio true true} ...
	I1209 11:52:51.037909  661546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-005123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:51.037972  661546 ssh_runner.go:195] Run: crio config
	I1209 11:52:51.080191  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:52:51.080220  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:51.080231  661546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:51.080258  661546 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.218 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-005123 NodeName:embed-certs-005123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:51.080442  661546 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-005123"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.218"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.218"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:51.080544  661546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:51.091894  661546 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:51.091975  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:51.101702  661546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1209 11:52:51.117636  661546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:51.133662  661546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1209 11:52:51.151725  661546 ssh_runner.go:195] Run: grep 192.168.72.218	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:51.155759  661546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
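For reference, the two host-file rewrites in this run (host.minikube.internal earlier, control-plane.minikube.internal here) should leave the guest's /etc/hosts with entries along these lines (illustrative reconstruction, not captured from the VM):

    192.168.72.1	host.minikube.internal
    192.168.72.218	control-plane.minikube.internal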
	I1209 11:52:51.167480  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:47.592707  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:50.093177  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:48.132348  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:48.632894  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:49.133272  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:49.633015  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.132977  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.632533  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:51.132939  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:51.632463  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:52.133082  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:52.633298  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.357709  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.357740  663024 pod_ready.go:82] duration metric: took 10.006992001s for pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.357752  663024 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.363374  663024 pod_ready.go:93] pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.363403  663024 pod_ready.go:82] duration metric: took 5.642657ms for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.363417  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.368456  663024 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.368478  663024 pod_ready.go:82] duration metric: took 5.053713ms for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.368488  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.374156  663024 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.374205  663024 pod_ready.go:82] duration metric: took 5.708489ms for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.374219  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6th5d" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.378734  663024 pod_ready.go:93] pod "kube-proxy-6th5d" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.378752  663024 pod_ready.go:82] duration metric: took 4.526066ms for pod "kube-proxy-6th5d" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.378760  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:52.384763  663024 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:53.389110  663024 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:53.389146  663024 pod_ready.go:82] duration metric: took 3.010378852s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:53.389162  663024 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:51.305408  661546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:51.330738  661546 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123 for IP: 192.168.72.218
	I1209 11:52:51.330766  661546 certs.go:194] generating shared ca certs ...
	I1209 11:52:51.330791  661546 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:51.331002  661546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:51.331099  661546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:51.331116  661546 certs.go:256] generating profile certs ...
	I1209 11:52:51.331252  661546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/client.key
	I1209 11:52:51.331333  661546 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.key.a40d22b0
	I1209 11:52:51.331400  661546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.key
	I1209 11:52:51.331595  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:51.331631  661546 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:51.331645  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:51.331680  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:51.331717  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:51.331747  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:51.331824  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:51.332728  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:51.366002  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:51.400591  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:51.431219  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:51.459334  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 11:52:51.487240  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 11:52:51.522273  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:51.545757  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:51.572793  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:51.595719  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:51.618456  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:51.643337  661546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:51.659719  661546 ssh_runner.go:195] Run: openssl version
	I1209 11:52:51.665339  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:51.676145  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.680615  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.680670  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.686782  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:51.697398  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:51.707438  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.711764  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.711832  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.717278  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:51.727774  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:51.738575  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.742996  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.743057  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.748505  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
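The 8-hex-digit names used above (51391683.0, 3ec20f2e.0, b5213941.0) follow the OpenSSL subject-hash convention: the value printed by openssl x509 -hash -noout for a certificate becomes the <hash>.0 symlink that the trust store looks up. A minimal sketch of the same pairing done by hand, using the minikubeCA certificate from this run (commands illustrative):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0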
	I1209 11:52:51.758738  661546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:51.763005  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:51.768964  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:51.775011  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:51.780810  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:51.786716  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:51.792351  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 11:52:51.798098  661546 kubeadm.go:392] StartCluster: {Name:embed-certs-005123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:51.798239  661546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:51.798296  661546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:51.840669  661546 cri.go:89] found id: ""
	I1209 11:52:51.840755  661546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:51.850404  661546 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:51.850429  661546 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:51.850474  661546 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:51.859350  661546 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:51.860405  661546 kubeconfig.go:125] found "embed-certs-005123" server: "https://192.168.72.218:8443"
	I1209 11:52:51.862591  661546 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:51.872497  661546 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.218
	I1209 11:52:51.872539  661546 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:51.872558  661546 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:51.872638  661546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:51.913221  661546 cri.go:89] found id: ""
	I1209 11:52:51.913316  661546 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:51.929885  661546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:51.940078  661546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:51.940105  661546 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:51.940166  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:51.948911  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:51.948977  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:51.958278  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:51.966808  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:51.966879  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:51.975480  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:51.984071  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:51.984127  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:51.992480  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:52.000798  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:52.000873  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:52.009553  661546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:52.019274  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:52.133477  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.081976  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.293871  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.364259  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.452043  661546 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:53.452147  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:53.952743  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.452498  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.952482  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.452783  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.483411  661546 api_server.go:72] duration metric: took 2.0313706s to wait for apiserver process to appear ...
	I1209 11:52:55.483448  661546 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:55.483473  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:55.483982  661546 api_server.go:269] stopped: https://192.168.72.218:8443/healthz: Get "https://192.168.72.218:8443/healthz": dial tcp 192.168.72.218:8443: connect: connection refused
	I1209 11:52:55.983589  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:52.592309  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:55.257400  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:53.132520  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:53.633385  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.132432  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.632974  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.132958  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.633343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:56.132687  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:56.633236  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:57.133489  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:57.633105  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.396602  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:57.397077  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:58.136225  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:58.136259  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:58.136276  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.174521  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:58.174583  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:58.484089  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.489495  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:58.489536  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:58.984185  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.990889  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:58.990932  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:59.484415  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:59.490878  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 200:
	ok
	I1209 11:52:59.498196  661546 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:59.498231  661546 api_server.go:131] duration metric: took 4.014775842s to wait for apiserver health ...
	I1209 11:52:59.498241  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:52:59.498247  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:59.499779  661546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:52:59.500941  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:59.514201  661546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:52:59.544391  661546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:59.555798  661546 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:59.555837  661546 system_pods.go:61] "coredns-7c65d6cfc9-cdnjm" [7cb724f8-c570-4a19-808d-da994ec43eaa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:59.555849  661546 system_pods.go:61] "etcd-embed-certs-005123" [bf194765-7520-4b5d-a1e5-b49830a0f620] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:59.555858  661546 system_pods.go:61] "kube-apiserver-embed-certs-005123" [470f6c19-0112-4b0d-89d9-b792e912cf6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:59.555863  661546 system_pods.go:61] "kube-controller-manager-embed-certs-005123" [b42748b2-f3a9-4d29-a832-a30d54b329c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:59.555868  661546 system_pods.go:61] "kube-proxy-b7bf2" [f9aab69c-2232-4f56-a502-ffd033f7ac10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 11:52:59.555877  661546 system_pods.go:61] "kube-scheduler-embed-certs-005123" [e61a8e3c-c1d3-4dab-abb2-6f5221bc5d25] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:59.555885  661546 system_pods.go:61] "metrics-server-6867b74b74-x4kvn" [210cb99c-e3e7-4337-bed4-985cb98143dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:59.555893  661546 system_pods.go:61] "storage-provisioner" [f2f7d9e2-1121-4df2-adb7-a0af32f957ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:59.555903  661546 system_pods.go:74] duration metric: took 11.485008ms to wait for pod list to return data ...
	I1209 11:52:59.555913  661546 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:59.560077  661546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:59.560100  661546 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:59.560110  661546 node_conditions.go:105] duration metric: took 4.192476ms to run NodePressure ...
	I1209 11:52:59.560132  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:59.890141  661546 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:59.895382  661546 kubeadm.go:739] kubelet initialised
	I1209 11:52:59.895414  661546 kubeadm.go:740] duration metric: took 5.227549ms waiting for restarted kubelet to initialise ...
	I1209 11:52:59.895425  661546 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:59.901454  661546 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:57.593336  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:00.094942  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:58.132858  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:58.633386  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.132544  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.633427  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:00.133402  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:00.632719  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:01.132786  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:01.632909  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:02.133197  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:02.632620  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.896691  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:02.396546  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:01.907730  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:03.910835  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:02.591692  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:05.090892  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:03.133091  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:03.633389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.132587  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.633239  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:05.132773  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:05.632456  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:06.132989  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:06.632584  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:07.133153  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:07.633389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.895599  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:06.896541  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.912963  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:06.408122  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.412579  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:10.419673  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:10.419702  661546 pod_ready.go:82] duration metric: took 10.518223469s for pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:10.419716  661546 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:07.591181  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:10.091248  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.132885  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:08.633192  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:09.132446  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:09.633385  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:10.132534  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:10.632399  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.132877  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.633091  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:12.132592  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:12.633185  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.396121  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.901605  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:12.425696  661546 pod_ready.go:103] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.926007  661546 pod_ready.go:93] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.926041  661546 pod_ready.go:82] duration metric: took 3.50631846s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.926053  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.931124  661546 pod_ready.go:93] pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.931150  661546 pod_ready.go:82] duration metric: took 5.090118ms for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.931163  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.935763  661546 pod_ready.go:93] pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.935783  661546 pod_ready.go:82] duration metric: took 4.613902ms for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.935792  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b7bf2" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.940013  661546 pod_ready.go:93] pod "kube-proxy-b7bf2" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.940037  661546 pod_ready.go:82] duration metric: took 4.238468ms for pod "kube-proxy-b7bf2" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.940050  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.944480  661546 pod_ready.go:93] pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.944497  661546 pod_ready.go:82] duration metric: took 4.439334ms for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.944504  661546 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:15.951194  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:12.091413  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:14.591239  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.132852  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:13.632863  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:14.132638  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:14.632522  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:15.133201  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:15.632442  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:16.132620  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:16.132747  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:16.171708  662586 cri.go:89] found id: ""
	I1209 11:53:16.171748  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.171761  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:16.171768  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:16.171823  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:16.206350  662586 cri.go:89] found id: ""
	I1209 11:53:16.206381  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.206390  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:16.206398  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:16.206468  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:16.239292  662586 cri.go:89] found id: ""
	I1209 11:53:16.239325  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.239334  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:16.239341  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:16.239397  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:16.275809  662586 cri.go:89] found id: ""
	I1209 11:53:16.275841  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.275850  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:16.275856  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:16.275913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:16.310434  662586 cri.go:89] found id: ""
	I1209 11:53:16.310466  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.310474  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:16.310480  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:16.310540  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:16.347697  662586 cri.go:89] found id: ""
	I1209 11:53:16.347729  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.347738  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:16.347745  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:16.347801  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:16.380949  662586 cri.go:89] found id: ""
	I1209 11:53:16.380977  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.380985  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:16.380992  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:16.381074  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:16.415236  662586 cri.go:89] found id: ""
	I1209 11:53:16.415268  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.415290  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:16.415304  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:16.415321  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:16.459614  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:16.459645  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:16.509575  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:16.509617  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:16.522864  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:16.522898  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:16.644997  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:16.645059  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:16.645106  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:16.396028  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:18.397195  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:17.951721  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.952199  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:16.591767  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.091470  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:21.095835  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.220978  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:19.233506  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:19.233597  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:19.268975  662586 cri.go:89] found id: ""
	I1209 11:53:19.269007  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.269019  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:19.269027  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:19.269103  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:19.304898  662586 cri.go:89] found id: ""
	I1209 11:53:19.304935  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.304949  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:19.304957  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:19.305034  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:19.344798  662586 cri.go:89] found id: ""
	I1209 11:53:19.344835  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.344846  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:19.344855  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:19.344925  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:19.395335  662586 cri.go:89] found id: ""
	I1209 11:53:19.395377  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.395387  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:19.395395  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:19.395464  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:19.430334  662586 cri.go:89] found id: ""
	I1209 11:53:19.430364  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.430377  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:19.430386  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:19.430465  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:19.468732  662586 cri.go:89] found id: ""
	I1209 11:53:19.468766  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.468775  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:19.468782  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:19.468836  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:19.503194  662586 cri.go:89] found id: ""
	I1209 11:53:19.503242  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.503255  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:19.503263  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:19.503328  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:19.537074  662586 cri.go:89] found id: ""
	I1209 11:53:19.537114  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.537125  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:19.537135  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:19.537151  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:19.590081  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:19.590130  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:19.604350  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:19.604388  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:19.683073  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:19.683106  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:19.683124  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:19.763564  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:19.763611  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:22.302792  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:22.315992  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:22.316079  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:22.350464  662586 cri.go:89] found id: ""
	I1209 11:53:22.350495  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.350505  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:22.350511  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:22.350569  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:22.382832  662586 cri.go:89] found id: ""
	I1209 11:53:22.382867  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.382880  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:22.382889  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:22.382958  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:22.417826  662586 cri.go:89] found id: ""
	I1209 11:53:22.417859  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.417871  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:22.417880  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:22.417963  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:22.451545  662586 cri.go:89] found id: ""
	I1209 11:53:22.451579  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.451588  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:22.451594  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:22.451659  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:22.488413  662586 cri.go:89] found id: ""
	I1209 11:53:22.488448  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.488458  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:22.488464  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:22.488531  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:22.523891  662586 cri.go:89] found id: ""
	I1209 11:53:22.523916  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.523925  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:22.523931  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:22.523990  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:22.555828  662586 cri.go:89] found id: ""
	I1209 11:53:22.555866  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.555879  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:22.555887  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:22.555960  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:22.592133  662586 cri.go:89] found id: ""
	I1209 11:53:22.592171  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.592181  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:22.592192  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:22.592209  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:22.641928  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:22.641966  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:22.655182  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:22.655215  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:53:20.896189  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:23.397242  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:21.957934  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:24.451292  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:23.591147  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:25.591982  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	W1209 11:53:22.724320  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:22.724343  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:22.724359  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:22.811692  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:22.811743  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:25.347903  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:25.360839  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:25.360907  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:25.392880  662586 cri.go:89] found id: ""
	I1209 11:53:25.392917  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.392930  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:25.392939  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:25.393008  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:25.427862  662586 cri.go:89] found id: ""
	I1209 11:53:25.427905  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.427914  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:25.427921  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:25.428009  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:25.463733  662586 cri.go:89] found id: ""
	I1209 11:53:25.463767  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.463778  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:25.463788  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:25.463884  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:25.501653  662586 cri.go:89] found id: ""
	I1209 11:53:25.501681  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.501690  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:25.501697  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:25.501751  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:25.535368  662586 cri.go:89] found id: ""
	I1209 11:53:25.535410  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.535422  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:25.535431  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:25.535511  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:25.569709  662586 cri.go:89] found id: ""
	I1209 11:53:25.569739  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.569748  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:25.569761  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:25.569827  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:25.604352  662586 cri.go:89] found id: ""
	I1209 11:53:25.604389  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.604404  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:25.604413  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:25.604477  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:25.635832  662586 cri.go:89] found id: ""
	I1209 11:53:25.635865  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.635878  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:25.635892  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:25.635908  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:25.650611  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:25.650647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:25.721092  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:25.721121  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:25.721139  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:25.795552  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:25.795598  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:25.858088  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:25.858161  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:25.898217  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.395882  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:26.950691  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.951203  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.091306  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:30.091842  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.410683  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:28.422993  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:28.423072  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:28.455054  662586 cri.go:89] found id: ""
	I1209 11:53:28.455083  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.455092  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:28.455098  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:28.455162  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:28.493000  662586 cri.go:89] found id: ""
	I1209 11:53:28.493037  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.493046  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:28.493052  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:28.493104  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:28.526294  662586 cri.go:89] found id: ""
	I1209 11:53:28.526333  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.526346  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:28.526354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:28.526417  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:28.560383  662586 cri.go:89] found id: ""
	I1209 11:53:28.560414  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.560423  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:28.560430  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:28.560485  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:28.595906  662586 cri.go:89] found id: ""
	I1209 11:53:28.595935  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.595946  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:28.595954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:28.596021  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:28.629548  662586 cri.go:89] found id: ""
	I1209 11:53:28.629584  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.629597  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:28.629607  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:28.629673  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:28.666362  662586 cri.go:89] found id: ""
	I1209 11:53:28.666398  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.666410  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:28.666418  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:28.666494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:28.697704  662586 cri.go:89] found id: ""
	I1209 11:53:28.697736  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.697746  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:28.697756  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:28.697769  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:28.745774  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:28.745816  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:28.759543  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:28.759582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:28.834772  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:28.834795  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:28.834812  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:28.913137  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:28.913178  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:31.460658  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:31.473503  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:31.473575  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:31.506710  662586 cri.go:89] found id: ""
	I1209 11:53:31.506748  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.506760  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:31.506770  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:31.506842  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:31.544127  662586 cri.go:89] found id: ""
	I1209 11:53:31.544188  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.544202  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:31.544211  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:31.544289  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:31.591081  662586 cri.go:89] found id: ""
	I1209 11:53:31.591116  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.591128  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:31.591135  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:31.591213  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:31.629311  662586 cri.go:89] found id: ""
	I1209 11:53:31.629340  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.629348  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:31.629355  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:31.629432  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:31.671035  662586 cri.go:89] found id: ""
	I1209 11:53:31.671069  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.671081  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:31.671090  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:31.671162  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:31.705753  662586 cri.go:89] found id: ""
	I1209 11:53:31.705792  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.705805  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:31.705815  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:31.705889  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:31.739118  662586 cri.go:89] found id: ""
	I1209 11:53:31.739146  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.739155  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:31.739162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:31.739225  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:31.771085  662586 cri.go:89] found id: ""
	I1209 11:53:31.771120  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.771129  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:31.771139  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:31.771152  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:31.820993  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:31.821049  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:31.835576  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:31.835612  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:31.903011  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:31.903039  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:31.903056  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:31.977784  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:31.977830  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:30.896197  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:33.395937  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:31.450832  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:33.451161  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:35.451446  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:32.590724  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:34.592352  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:34.514654  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:34.529156  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:34.529236  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:34.567552  662586 cri.go:89] found id: ""
	I1209 11:53:34.567580  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.567590  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:34.567598  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:34.567665  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:34.608863  662586 cri.go:89] found id: ""
	I1209 11:53:34.608891  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.608900  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:34.608907  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:34.608970  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:34.647204  662586 cri.go:89] found id: ""
	I1209 11:53:34.647242  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.647254  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:34.647263  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:34.647333  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:34.682511  662586 cri.go:89] found id: ""
	I1209 11:53:34.682565  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.682580  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:34.682596  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:34.682674  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:34.717557  662586 cri.go:89] found id: ""
	I1209 11:53:34.717585  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.717595  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:34.717602  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:34.717670  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:34.749814  662586 cri.go:89] found id: ""
	I1209 11:53:34.749851  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.749865  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:34.749876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:34.749949  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:34.782732  662586 cri.go:89] found id: ""
	I1209 11:53:34.782766  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.782776  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:34.782782  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:34.782846  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:34.817114  662586 cri.go:89] found id: ""
	I1209 11:53:34.817149  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.817162  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:34.817175  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:34.817192  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:34.885963  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:34.885986  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:34.886001  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:34.969858  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:34.969905  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:35.006981  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:35.007024  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:35.055360  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:35.055401  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:37.570641  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:37.595904  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:37.595986  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:37.642205  662586 cri.go:89] found id: ""
	I1209 11:53:37.642248  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.642261  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:37.642270  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:37.642347  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:37.676666  662586 cri.go:89] found id: ""
	I1209 11:53:37.676692  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.676701  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:37.676707  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:37.676760  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:35.396037  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.896489  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.952569  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:40.450464  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.092250  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:39.092392  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.714201  662586 cri.go:89] found id: ""
	I1209 11:53:37.714233  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.714243  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:37.714249  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:37.714311  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:37.748018  662586 cri.go:89] found id: ""
	I1209 11:53:37.748047  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.748058  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:37.748067  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:37.748127  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:37.783763  662586 cri.go:89] found id: ""
	I1209 11:53:37.783799  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.783807  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:37.783823  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:37.783898  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:37.822470  662586 cri.go:89] found id: ""
	I1209 11:53:37.822502  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.822514  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:37.822523  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:37.822585  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:37.858493  662586 cri.go:89] found id: ""
	I1209 11:53:37.858527  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.858537  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:37.858543  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:37.858599  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:37.899263  662586 cri.go:89] found id: ""
	I1209 11:53:37.899288  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.899295  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:37.899304  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:37.899321  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:37.972531  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:37.972559  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:37.972575  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:38.046271  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:38.046315  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:38.088829  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:38.088867  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:38.141935  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:38.141985  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:40.657131  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:40.669884  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:40.669954  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:40.704291  662586 cri.go:89] found id: ""
	I1209 11:53:40.704332  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.704345  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:40.704357  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:40.704435  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:40.738637  662586 cri.go:89] found id: ""
	I1209 11:53:40.738673  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.738684  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:40.738690  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:40.738747  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:40.770737  662586 cri.go:89] found id: ""
	I1209 11:53:40.770774  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.770787  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:40.770796  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:40.770865  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:40.805667  662586 cri.go:89] found id: ""
	I1209 11:53:40.805702  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.805729  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:40.805739  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:40.805812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:40.838444  662586 cri.go:89] found id: ""
	I1209 11:53:40.838482  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.838496  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:40.838505  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:40.838578  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:40.871644  662586 cri.go:89] found id: ""
	I1209 11:53:40.871679  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.871691  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:40.871700  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:40.871763  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:40.907242  662586 cri.go:89] found id: ""
	I1209 11:53:40.907275  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.907284  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:40.907291  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:40.907359  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:40.941542  662586 cri.go:89] found id: ""
	I1209 11:53:40.941570  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.941583  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:40.941595  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:40.941616  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:41.022344  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:41.022373  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:41.022387  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:41.097083  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:41.097129  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:41.135303  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:41.135349  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:41.191400  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:41.191447  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:40.396681  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:42.895118  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:42.451217  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:44.951893  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:41.591753  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:44.090762  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:46.091821  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:43.705246  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:43.717939  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:43.718001  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:43.750027  662586 cri.go:89] found id: ""
	I1209 11:53:43.750066  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.750079  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:43.750087  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:43.750156  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:43.782028  662586 cri.go:89] found id: ""
	I1209 11:53:43.782067  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.782081  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:43.782090  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:43.782153  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:43.815509  662586 cri.go:89] found id: ""
	I1209 11:53:43.815549  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.815562  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:43.815570  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:43.815629  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:43.852803  662586 cri.go:89] found id: ""
	I1209 11:53:43.852834  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.852842  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:43.852850  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:43.852915  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:43.886761  662586 cri.go:89] found id: ""
	I1209 11:53:43.886789  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.886798  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:43.886805  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:43.886883  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:43.924427  662586 cri.go:89] found id: ""
	I1209 11:53:43.924458  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.924466  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:43.924478  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:43.924542  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:43.960351  662586 cri.go:89] found id: ""
	I1209 11:53:43.960381  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.960398  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:43.960407  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:43.960476  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:43.993933  662586 cri.go:89] found id: ""
	I1209 11:53:43.993960  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.993969  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:43.993979  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:43.994002  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:44.006915  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:44.006952  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:44.078928  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:44.078981  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:44.078999  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:44.158129  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:44.158188  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:44.199543  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:44.199577  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:46.748607  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:46.762381  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:46.762494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:46.795674  662586 cri.go:89] found id: ""
	I1209 11:53:46.795713  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.795727  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:46.795737  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:46.795812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:46.834027  662586 cri.go:89] found id: ""
	I1209 11:53:46.834055  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.834065  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:46.834072  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:46.834124  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:46.872116  662586 cri.go:89] found id: ""
	I1209 11:53:46.872156  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.872169  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:46.872179  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:46.872264  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:46.906571  662586 cri.go:89] found id: ""
	I1209 11:53:46.906599  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.906608  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:46.906615  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:46.906676  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:46.938266  662586 cri.go:89] found id: ""
	I1209 11:53:46.938303  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.938315  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:46.938323  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:46.938381  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:46.972281  662586 cri.go:89] found id: ""
	I1209 11:53:46.972318  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.972329  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:46.972337  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:46.972391  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:47.004797  662586 cri.go:89] found id: ""
	I1209 11:53:47.004828  662586 logs.go:282] 0 containers: []
	W1209 11:53:47.004837  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:47.004843  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:47.004908  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:47.035877  662586 cri.go:89] found id: ""
	I1209 11:53:47.035905  662586 logs.go:282] 0 containers: []
	W1209 11:53:47.035917  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:47.035931  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:47.035947  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:47.087654  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:47.087706  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:47.102311  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:47.102346  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:47.195370  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:47.195396  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:47.195414  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:47.279103  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:47.279158  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:44.895382  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:46.895838  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:48.896133  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:47.453879  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:49.951686  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:48.591393  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:51.090806  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:49.817942  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:49.830291  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:49.830357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:49.862917  662586 cri.go:89] found id: ""
	I1209 11:53:49.862950  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.862959  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:49.862965  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:49.863033  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:49.894971  662586 cri.go:89] found id: ""
	I1209 11:53:49.895005  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.895018  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:49.895027  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:49.895097  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:49.931737  662586 cri.go:89] found id: ""
	I1209 11:53:49.931775  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.931786  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:49.931800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:49.931862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:49.971064  662586 cri.go:89] found id: ""
	I1209 11:53:49.971097  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.971109  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:49.971118  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:49.971210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:50.005354  662586 cri.go:89] found id: ""
	I1209 11:53:50.005393  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.005417  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:50.005427  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:50.005501  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:50.044209  662586 cri.go:89] found id: ""
	I1209 11:53:50.044240  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.044249  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:50.044257  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:50.044313  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:50.076360  662586 cri.go:89] found id: ""
	I1209 11:53:50.076408  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.076418  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:50.076426  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:50.076494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:50.112125  662586 cri.go:89] found id: ""
	I1209 11:53:50.112168  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.112196  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:50.112210  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:50.112228  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:50.164486  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:50.164530  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:50.178489  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:50.178525  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:50.250131  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:50.250165  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:50.250196  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:50.329733  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:50.329779  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:50.896354  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:53.395149  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:52.450595  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:54.450939  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:53.092311  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:55.590766  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:52.874887  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:52.888518  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:52.888607  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:52.924361  662586 cri.go:89] found id: ""
	I1209 11:53:52.924389  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.924398  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:52.924404  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:52.924467  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:52.957769  662586 cri.go:89] found id: ""
	I1209 11:53:52.957803  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.957816  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:52.957824  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:52.957891  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:52.990339  662586 cri.go:89] found id: ""
	I1209 11:53:52.990376  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.990388  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:52.990397  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:52.990461  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:53.022959  662586 cri.go:89] found id: ""
	I1209 11:53:53.023003  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.023017  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:53.023028  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:53.023111  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:53.060271  662586 cri.go:89] found id: ""
	I1209 11:53:53.060299  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.060315  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:53.060321  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:53.060390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:53.093470  662586 cri.go:89] found id: ""
	I1209 11:53:53.093500  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.093511  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:53.093519  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:53.093575  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:53.128902  662586 cri.go:89] found id: ""
	I1209 11:53:53.128941  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.128955  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:53.128963  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:53.129036  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:53.161927  662586 cri.go:89] found id: ""
	I1209 11:53:53.161955  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.161964  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:53.161974  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:53.161988  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:53.214098  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:53.214140  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:53.229191  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:53.229232  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:53.308648  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:53.308678  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:53.308695  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:53.386776  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:53.386816  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:55.929307  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:55.942217  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:55.942285  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:55.983522  662586 cri.go:89] found id: ""
	I1209 11:53:55.983563  662586 logs.go:282] 0 containers: []
	W1209 11:53:55.983572  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:55.983579  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:55.983645  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:56.017262  662586 cri.go:89] found id: ""
	I1209 11:53:56.017293  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.017308  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:56.017314  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:56.017367  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:56.052385  662586 cri.go:89] found id: ""
	I1209 11:53:56.052419  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.052429  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:56.052436  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:56.052489  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:56.085385  662586 cri.go:89] found id: ""
	I1209 11:53:56.085432  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.085444  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:56.085452  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:56.085519  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:56.122754  662586 cri.go:89] found id: ""
	I1209 11:53:56.122785  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.122794  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:56.122800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:56.122862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:56.159033  662586 cri.go:89] found id: ""
	I1209 11:53:56.159061  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.159070  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:56.159077  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:56.159128  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:56.198022  662586 cri.go:89] found id: ""
	I1209 11:53:56.198058  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.198070  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:56.198078  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:56.198148  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:56.231475  662586 cri.go:89] found id: ""
	I1209 11:53:56.231515  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.231528  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:56.231542  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:56.231559  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:56.304922  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:56.304969  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:56.339875  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:56.339916  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:56.392893  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:56.392929  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:56.406334  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:56.406376  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:56.474037  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:55.895077  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:57.895835  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:56.452163  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:58.950981  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:57.590943  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:00.091057  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:58.974725  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:58.987817  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:58.987890  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:59.020951  662586 cri.go:89] found id: ""
	I1209 11:53:59.020987  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.020996  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:59.021003  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:59.021055  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:59.055675  662586 cri.go:89] found id: ""
	I1209 11:53:59.055715  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.055727  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:59.055733  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:59.055800  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:59.090099  662586 cri.go:89] found id: ""
	I1209 11:53:59.090138  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.090150  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:59.090158  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:59.090252  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:59.124680  662586 cri.go:89] found id: ""
	I1209 11:53:59.124718  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.124730  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:59.124739  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:59.124802  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:59.157772  662586 cri.go:89] found id: ""
	I1209 11:53:59.157808  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.157819  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:59.157828  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:59.157892  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:59.191098  662586 cri.go:89] found id: ""
	I1209 11:53:59.191132  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.191141  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:59.191148  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:59.191212  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:59.224050  662586 cri.go:89] found id: ""
	I1209 11:53:59.224090  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.224102  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:59.224110  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:59.224198  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:59.262361  662586 cri.go:89] found id: ""
	I1209 11:53:59.262397  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.262418  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:59.262432  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:59.262456  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:59.276811  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:59.276844  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:59.349465  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:59.349492  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:59.349506  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:59.429146  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:59.429192  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:59.470246  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:59.470287  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:02.021651  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:02.036039  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:02.036109  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:02.070999  662586 cri.go:89] found id: ""
	I1209 11:54:02.071034  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.071045  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:02.071052  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:02.071119  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:02.107506  662586 cri.go:89] found id: ""
	I1209 11:54:02.107536  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.107546  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:02.107554  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:02.107624  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:02.146279  662586 cri.go:89] found id: ""
	I1209 11:54:02.146314  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.146326  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:02.146342  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:02.146408  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:02.178349  662586 cri.go:89] found id: ""
	I1209 11:54:02.178378  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.178387  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:02.178402  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:02.178460  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:02.211916  662586 cri.go:89] found id: ""
	I1209 11:54:02.211952  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.211963  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:02.211969  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:02.212038  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:02.246334  662586 cri.go:89] found id: ""
	I1209 11:54:02.246370  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.246380  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:02.246387  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:02.246452  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:02.280111  662586 cri.go:89] found id: ""
	I1209 11:54:02.280157  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.280168  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:02.280175  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:02.280246  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:02.314141  662586 cri.go:89] found id: ""
	I1209 11:54:02.314188  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.314203  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:02.314216  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:02.314236  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:02.327220  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:02.327253  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:02.396099  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:02.396127  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:02.396142  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:02.478096  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:02.478148  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:02.515144  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:02.515175  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:59.896109  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:02.396485  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:04.396925  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:01.450279  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:03.450732  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:05.451265  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:02.092010  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:04.092427  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:05.069286  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:05.082453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:05.082540  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:05.116263  662586 cri.go:89] found id: ""
	I1209 11:54:05.116299  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.116313  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:05.116321  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:05.116388  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:05.150736  662586 cri.go:89] found id: ""
	I1209 11:54:05.150775  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.150788  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:05.150796  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:05.150864  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:05.183757  662586 cri.go:89] found id: ""
	I1209 11:54:05.183792  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.183804  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:05.183812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:05.183873  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:05.215986  662586 cri.go:89] found id: ""
	I1209 11:54:05.216017  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.216029  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:05.216038  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:05.216096  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:05.247648  662586 cri.go:89] found id: ""
	I1209 11:54:05.247686  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.247698  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:05.247707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:05.247776  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:05.279455  662586 cri.go:89] found id: ""
	I1209 11:54:05.279484  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.279495  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:05.279504  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:05.279567  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:05.320334  662586 cri.go:89] found id: ""
	I1209 11:54:05.320374  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.320387  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:05.320398  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:05.320490  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:05.353475  662586 cri.go:89] found id: ""
	I1209 11:54:05.353503  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.353512  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:05.353522  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:05.353536  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:05.368320  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:05.368357  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:05.442152  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:05.442193  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:05.442212  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:05.523726  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:05.523768  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:05.562405  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:05.562438  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:06.895764  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.897032  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:07.454237  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:09.456440  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:06.591474  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.591578  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:11.091599  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.115564  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:08.129426  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:08.129523  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:08.162412  662586 cri.go:89] found id: ""
	I1209 11:54:08.162454  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.162467  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:08.162477  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:08.162543  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:08.196821  662586 cri.go:89] found id: ""
	I1209 11:54:08.196860  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.196873  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:08.196882  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:08.196949  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:08.233068  662586 cri.go:89] found id: ""
	I1209 11:54:08.233106  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.233117  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:08.233124  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:08.233184  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:08.268683  662586 cri.go:89] found id: ""
	I1209 11:54:08.268715  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.268724  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:08.268731  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:08.268790  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:08.303237  662586 cri.go:89] found id: ""
	I1209 11:54:08.303276  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.303288  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:08.303309  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:08.303393  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:08.339513  662586 cri.go:89] found id: ""
	I1209 11:54:08.339543  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.339551  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:08.339557  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:08.339612  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:08.376237  662586 cri.go:89] found id: ""
	I1209 11:54:08.376268  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.376289  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:08.376298  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:08.376363  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:08.410530  662586 cri.go:89] found id: ""
	I1209 11:54:08.410560  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.410568  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:08.410577  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:08.410589  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:08.460064  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:08.460101  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:08.474547  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:08.474582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:08.544231  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:08.544260  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:08.544277  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:08.624727  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:08.624775  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:11.167943  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:11.183210  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:11.183294  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:11.221326  662586 cri.go:89] found id: ""
	I1209 11:54:11.221356  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.221365  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:11.221371  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:11.221434  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:11.254688  662586 cri.go:89] found id: ""
	I1209 11:54:11.254721  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.254730  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:11.254736  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:11.254801  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:11.287611  662586 cri.go:89] found id: ""
	I1209 11:54:11.287649  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.287660  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:11.287666  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:11.287738  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:11.320533  662586 cri.go:89] found id: ""
	I1209 11:54:11.320565  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.320574  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:11.320580  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:11.320638  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:11.362890  662586 cri.go:89] found id: ""
	I1209 11:54:11.362923  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.362933  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:11.362939  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:11.363007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:11.418729  662586 cri.go:89] found id: ""
	I1209 11:54:11.418762  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.418772  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:11.418779  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:11.418837  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:11.455336  662586 cri.go:89] found id: ""
	I1209 11:54:11.455374  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.455388  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:11.455397  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:11.455479  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:11.491307  662586 cri.go:89] found id: ""
	I1209 11:54:11.491334  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.491344  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:11.491355  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:11.491369  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:11.543161  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:11.543204  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:11.556633  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:11.556670  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:11.626971  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:11.627001  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:11.627025  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:11.702061  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:11.702107  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:11.396167  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:13.897097  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:11.952029  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:14.451701  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:13.590749  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:15.591845  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:14.245241  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:14.258461  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:14.258544  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:14.292108  662586 cri.go:89] found id: ""
	I1209 11:54:14.292147  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.292156  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:14.292163  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:14.292219  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:14.327347  662586 cri.go:89] found id: ""
	I1209 11:54:14.327381  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.327394  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:14.327403  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:14.327484  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:14.361188  662586 cri.go:89] found id: ""
	I1209 11:54:14.361220  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.361229  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:14.361236  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:14.361290  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:14.394898  662586 cri.go:89] found id: ""
	I1209 11:54:14.394936  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.394948  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:14.394960  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:14.395027  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:14.429326  662586 cri.go:89] found id: ""
	I1209 11:54:14.429402  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.429420  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:14.429431  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:14.429510  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:14.462903  662586 cri.go:89] found id: ""
	I1209 11:54:14.462938  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.462947  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:14.462954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:14.463009  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:14.496362  662586 cri.go:89] found id: ""
	I1209 11:54:14.496396  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.496409  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:14.496418  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:14.496562  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:14.530052  662586 cri.go:89] found id: ""
	I1209 11:54:14.530085  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.530098  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:14.530111  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:14.530131  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:14.543096  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:14.543133  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:14.611030  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:14.611055  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:14.611067  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:14.684984  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:14.685041  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:14.722842  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:14.722881  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:17.275868  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:17.288812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:17.288898  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:17.323732  662586 cri.go:89] found id: ""
	I1209 11:54:17.323766  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.323777  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:17.323786  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:17.323852  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:17.367753  662586 cri.go:89] found id: ""
	I1209 11:54:17.367788  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.367801  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:17.367810  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:17.367878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:17.411444  662586 cri.go:89] found id: ""
	I1209 11:54:17.411476  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.411488  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:17.411496  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:17.411563  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:17.450790  662586 cri.go:89] found id: ""
	I1209 11:54:17.450821  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.450832  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:17.450840  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:17.450913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:17.488824  662586 cri.go:89] found id: ""
	I1209 11:54:17.488859  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.488869  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:17.488876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:17.488948  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:17.522051  662586 cri.go:89] found id: ""
	I1209 11:54:17.522085  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.522094  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:17.522102  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:17.522165  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:17.556653  662586 cri.go:89] found id: ""
	I1209 11:54:17.556687  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.556700  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:17.556707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:17.556783  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:17.591303  662586 cri.go:89] found id: ""
	I1209 11:54:17.591337  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.591355  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:17.591367  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:17.591384  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:17.656675  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:17.656699  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:17.656712  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:16.396574  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:18.896050  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:16.950508  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:19.451294  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:18.091307  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:20.091489  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:17.739894  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:17.739939  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:17.789486  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:17.789517  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:17.843606  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:17.843648  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:20.361896  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:20.378015  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:20.378105  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:20.412252  662586 cri.go:89] found id: ""
	I1209 11:54:20.412299  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.412311  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:20.412327  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:20.412396  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:20.443638  662586 cri.go:89] found id: ""
	I1209 11:54:20.443671  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.443682  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:20.443690  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:20.443758  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:20.478578  662586 cri.go:89] found id: ""
	I1209 11:54:20.478613  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.478625  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:20.478634  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:20.478704  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:20.512232  662586 cri.go:89] found id: ""
	I1209 11:54:20.512266  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.512279  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:20.512295  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:20.512357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:20.544358  662586 cri.go:89] found id: ""
	I1209 11:54:20.544398  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.544413  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:20.544429  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:20.544494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:20.579476  662586 cri.go:89] found id: ""
	I1209 11:54:20.579513  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.579525  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:20.579533  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:20.579600  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:20.613851  662586 cri.go:89] found id: ""
	I1209 11:54:20.613884  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.613897  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:20.613903  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:20.613973  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:20.647311  662586 cri.go:89] found id: ""
	I1209 11:54:20.647342  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.647351  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:20.647362  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:20.647375  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:20.695798  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:20.695839  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:20.709443  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:20.709478  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:20.779211  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:20.779237  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:20.779253  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:20.857966  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:20.858012  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:20.896168  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:22.896667  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:21.455716  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:23.950823  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:25.952038  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:22.592225  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:25.091934  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:23.398095  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:23.412622  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:23.412686  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:23.446582  662586 cri.go:89] found id: ""
	I1209 11:54:23.446616  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.446628  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:23.446637  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:23.446705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:23.487896  662586 cri.go:89] found id: ""
	I1209 11:54:23.487926  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.487935  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:23.487941  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:23.488007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:23.521520  662586 cri.go:89] found id: ""
	I1209 11:54:23.521559  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.521571  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:23.521579  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:23.521651  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:23.561296  662586 cri.go:89] found id: ""
	I1209 11:54:23.561329  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.561342  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:23.561350  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:23.561417  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:23.604936  662586 cri.go:89] found id: ""
	I1209 11:54:23.604965  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.604976  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:23.604985  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:23.605055  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:23.665193  662586 cri.go:89] found id: ""
	I1209 11:54:23.665225  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.665237  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:23.665247  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:23.665315  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:23.700202  662586 cri.go:89] found id: ""
	I1209 11:54:23.700239  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.700251  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:23.700259  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:23.700336  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:23.734877  662586 cri.go:89] found id: ""
	I1209 11:54:23.734907  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.734917  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:23.734927  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:23.734941  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:23.817328  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:23.817371  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:23.855052  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:23.855085  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:23.909107  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:23.909154  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:23.924198  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:23.924227  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:23.991976  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:26.492366  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:26.506223  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:26.506299  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:26.544932  662586 cri.go:89] found id: ""
	I1209 11:54:26.544974  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.544987  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:26.544997  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:26.545080  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:26.579581  662586 cri.go:89] found id: ""
	I1209 11:54:26.579621  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.579634  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:26.579643  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:26.579716  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:26.612510  662586 cri.go:89] found id: ""
	I1209 11:54:26.612545  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.612567  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:26.612577  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:26.612646  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:26.646273  662586 cri.go:89] found id: ""
	I1209 11:54:26.646306  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.646316  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:26.646322  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:26.646376  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:26.682027  662586 cri.go:89] found id: ""
	I1209 11:54:26.682063  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.682072  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:26.682078  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:26.682132  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:26.715822  662586 cri.go:89] found id: ""
	I1209 11:54:26.715876  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.715889  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:26.715898  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:26.715964  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:26.755976  662586 cri.go:89] found id: ""
	I1209 11:54:26.756016  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.756031  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:26.756040  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:26.756122  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:26.787258  662586 cri.go:89] found id: ""
	I1209 11:54:26.787297  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.787308  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:26.787319  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:26.787333  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:26.800534  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:26.800573  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:26.865767  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:26.865798  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:26.865824  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:26.950409  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:26.950460  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:26.994281  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:26.994320  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:25.396411  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:27.894846  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:28.451141  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:30.455101  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:27.591769  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:30.091528  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:29.544568  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:29.565182  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:29.565263  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:29.625116  662586 cri.go:89] found id: ""
	I1209 11:54:29.625155  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.625168  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:29.625181  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:29.625257  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:29.673689  662586 cri.go:89] found id: ""
	I1209 11:54:29.673727  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.673739  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:29.673747  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:29.673811  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:29.705925  662586 cri.go:89] found id: ""
	I1209 11:54:29.705959  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.705971  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:29.705979  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:29.706033  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:29.738731  662586 cri.go:89] found id: ""
	I1209 11:54:29.738759  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.738767  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:29.738774  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:29.738832  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:29.770778  662586 cri.go:89] found id: ""
	I1209 11:54:29.770814  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.770826  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:29.770833  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:29.770899  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:29.801925  662586 cri.go:89] found id: ""
	I1209 11:54:29.801961  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.801973  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:29.801981  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:29.802050  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:29.833681  662586 cri.go:89] found id: ""
	I1209 11:54:29.833712  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.833722  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:29.833727  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:29.833791  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:29.873666  662586 cri.go:89] found id: ""
	I1209 11:54:29.873700  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.873712  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:29.873722  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:29.873735  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:29.914855  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:29.914895  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:29.967730  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:29.967772  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:29.982037  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:29.982070  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:30.047168  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:30.047195  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:30.047212  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:32.623371  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:32.636346  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:32.636411  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:32.677709  662586 cri.go:89] found id: ""
	I1209 11:54:32.677736  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.677744  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:32.677753  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:32.677805  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:29.896176  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.395216  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.952287  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:35.451456  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.092615  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:34.591397  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.710906  662586 cri.go:89] found id: ""
	I1209 11:54:32.710933  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.710942  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:32.710948  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:32.711000  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:32.744623  662586 cri.go:89] found id: ""
	I1209 11:54:32.744654  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.744667  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:32.744676  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:32.744736  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:32.779334  662586 cri.go:89] found id: ""
	I1209 11:54:32.779364  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.779375  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:32.779382  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:32.779443  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:32.814998  662586 cri.go:89] found id: ""
	I1209 11:54:32.815032  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.815046  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:32.815055  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:32.815128  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:32.850054  662586 cri.go:89] found id: ""
	I1209 11:54:32.850099  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.850116  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:32.850127  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:32.850213  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:32.885769  662586 cri.go:89] found id: ""
	I1209 11:54:32.885805  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.885818  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:32.885827  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:32.885899  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:32.927973  662586 cri.go:89] found id: ""
	I1209 11:54:32.928001  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.928010  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:32.928019  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:32.928032  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:32.981915  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:32.981966  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:32.995817  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:32.995851  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:33.062409  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:33.062445  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:33.062462  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:33.146967  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:33.147011  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:35.688225  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:35.701226  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:35.701325  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:35.738628  662586 cri.go:89] found id: ""
	I1209 11:54:35.738655  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.738663  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:35.738670  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:35.738737  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:35.771125  662586 cri.go:89] found id: ""
	I1209 11:54:35.771163  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.771177  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:35.771187  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:35.771260  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:35.806244  662586 cri.go:89] found id: ""
	I1209 11:54:35.806277  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.806290  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:35.806301  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:35.806376  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:35.839871  662586 cri.go:89] found id: ""
	I1209 11:54:35.839912  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.839925  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:35.839932  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:35.840010  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:35.874994  662586 cri.go:89] found id: ""
	I1209 11:54:35.875034  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.875046  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:35.875054  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:35.875129  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:35.910802  662586 cri.go:89] found id: ""
	I1209 11:54:35.910834  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.910846  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:35.910855  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:35.910927  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:35.944633  662586 cri.go:89] found id: ""
	I1209 11:54:35.944663  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.944672  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:35.944678  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:35.944749  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:35.982732  662586 cri.go:89] found id: ""
	I1209 11:54:35.982781  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.982796  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:35.982811  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:35.982830  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:35.996271  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:35.996302  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:36.063463  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:36.063533  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:36.063554  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:36.141789  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:36.141833  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:36.187015  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:36.187047  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:34.895890  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.396472  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.951404  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:40.452814  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.091548  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:39.092168  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:38.739585  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:38.754322  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:38.754394  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:38.792497  662586 cri.go:89] found id: ""
	I1209 11:54:38.792525  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.792535  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:38.792543  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:38.792608  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:38.829730  662586 cri.go:89] found id: ""
	I1209 11:54:38.829759  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.829768  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:38.829774  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:38.829834  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:38.869942  662586 cri.go:89] found id: ""
	I1209 11:54:38.869981  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.869994  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:38.870015  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:38.870085  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:38.906001  662586 cri.go:89] found id: ""
	I1209 11:54:38.906041  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.906054  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:38.906063  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:38.906133  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:38.944389  662586 cri.go:89] found id: ""
	I1209 11:54:38.944427  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.944445  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:38.944453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:38.944534  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:38.979633  662586 cri.go:89] found id: ""
	I1209 11:54:38.979665  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.979674  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:38.979681  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:38.979735  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:39.016366  662586 cri.go:89] found id: ""
	I1209 11:54:39.016402  662586 logs.go:282] 0 containers: []
	W1209 11:54:39.016416  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:39.016424  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:39.016489  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:39.049084  662586 cri.go:89] found id: ""
	I1209 11:54:39.049116  662586 logs.go:282] 0 containers: []
	W1209 11:54:39.049125  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:39.049134  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:39.049148  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:39.113953  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:39.113985  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:39.114004  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:39.191715  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:39.191767  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:39.232127  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:39.232167  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:39.281406  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:39.281448  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:41.795395  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:41.810293  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:41.810364  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:41.849819  662586 cri.go:89] found id: ""
	I1209 11:54:41.849858  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.849872  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:41.849882  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:41.849952  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:41.883871  662586 cri.go:89] found id: ""
	I1209 11:54:41.883908  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.883934  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:41.883942  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:41.884017  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:41.918194  662586 cri.go:89] found id: ""
	I1209 11:54:41.918230  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.918239  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:41.918245  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:41.918312  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:41.950878  662586 cri.go:89] found id: ""
	I1209 11:54:41.950912  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.950924  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:41.950933  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:41.950995  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:41.982922  662586 cri.go:89] found id: ""
	I1209 11:54:41.982964  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.982976  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:41.982985  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:41.983064  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:42.014066  662586 cri.go:89] found id: ""
	I1209 11:54:42.014107  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.014120  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:42.014129  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:42.014229  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:42.048017  662586 cri.go:89] found id: ""
	I1209 11:54:42.048056  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.048070  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:42.048079  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:42.048146  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:42.080585  662586 cri.go:89] found id: ""
	I1209 11:54:42.080614  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.080624  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:42.080634  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:42.080646  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:42.135012  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:42.135054  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:42.148424  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:42.148462  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:42.219179  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:42.219206  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:42.219230  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:42.305855  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:42.305902  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:39.895830  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:41.896255  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:44.398373  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:42.949835  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:44.951542  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:41.590831  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:43.592053  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:45.593044  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:44.843158  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:44.856317  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:44.856380  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:44.890940  662586 cri.go:89] found id: ""
	I1209 11:54:44.890984  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.891003  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:44.891012  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:44.891081  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:44.923657  662586 cri.go:89] found id: ""
	I1209 11:54:44.923684  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.923692  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:44.923698  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:44.923769  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:44.957512  662586 cri.go:89] found id: ""
	I1209 11:54:44.957545  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.957558  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:44.957566  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:44.957636  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:44.998084  662586 cri.go:89] found id: ""
	I1209 11:54:44.998112  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.998121  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:44.998128  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:44.998210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:45.030335  662586 cri.go:89] found id: ""
	I1209 11:54:45.030360  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.030369  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:45.030375  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:45.030447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:45.063098  662586 cri.go:89] found id: ""
	I1209 11:54:45.063127  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.063135  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:45.063141  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:45.063210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:45.098430  662586 cri.go:89] found id: ""
	I1209 11:54:45.098458  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.098466  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:45.098472  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:45.098526  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:45.132064  662586 cri.go:89] found id: ""
	I1209 11:54:45.132094  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.132102  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:45.132113  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:45.132131  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:45.185512  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:45.185556  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:45.199543  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:45.199572  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:45.268777  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:45.268803  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:45.268817  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:45.352250  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:45.352299  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:46.897153  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:49.395935  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:46.952862  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:49.450006  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:48.092394  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:50.591937  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:47.892201  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:47.906961  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:47.907053  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:47.941349  662586 cri.go:89] found id: ""
	I1209 11:54:47.941394  662586 logs.go:282] 0 containers: []
	W1209 11:54:47.941408  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:47.941418  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:47.941479  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:47.981086  662586 cri.go:89] found id: ""
	I1209 11:54:47.981120  662586 logs.go:282] 0 containers: []
	W1209 11:54:47.981133  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:47.981141  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:47.981210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:48.014105  662586 cri.go:89] found id: ""
	I1209 11:54:48.014142  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.014151  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:48.014162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:48.014249  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:48.049506  662586 cri.go:89] found id: ""
	I1209 11:54:48.049535  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.049544  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:48.049552  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:48.049619  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:48.084284  662586 cri.go:89] found id: ""
	I1209 11:54:48.084314  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.084324  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:48.084336  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:48.084406  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:48.117318  662586 cri.go:89] found id: ""
	I1209 11:54:48.117349  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.117362  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:48.117371  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:48.117441  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:48.150121  662586 cri.go:89] found id: ""
	I1209 11:54:48.150151  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.150187  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:48.150198  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:48.150266  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:48.180919  662586 cri.go:89] found id: ""
	I1209 11:54:48.180947  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.180955  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:48.180966  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:48.180978  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:48.249572  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:48.249602  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:48.249617  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:48.324508  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:48.324552  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:48.363856  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:48.363901  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:48.415662  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:48.415721  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:50.929811  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:50.943650  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:50.943714  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:50.976444  662586 cri.go:89] found id: ""
	I1209 11:54:50.976480  662586 logs.go:282] 0 containers: []
	W1209 11:54:50.976493  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:50.976502  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:50.976574  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:51.016567  662586 cri.go:89] found id: ""
	I1209 11:54:51.016600  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.016613  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:51.016621  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:51.016699  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:51.048933  662586 cri.go:89] found id: ""
	I1209 11:54:51.048967  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.048977  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:51.048986  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:51.049073  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:51.083292  662586 cri.go:89] found id: ""
	I1209 11:54:51.083333  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.083345  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:51.083354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:51.083423  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:51.118505  662586 cri.go:89] found id: ""
	I1209 11:54:51.118547  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.118560  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:51.118571  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:51.118644  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:51.152818  662586 cri.go:89] found id: ""
	I1209 11:54:51.152847  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.152856  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:51.152870  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:51.152922  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:51.186953  662586 cri.go:89] found id: ""
	I1209 11:54:51.186981  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.186991  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:51.186997  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:51.187063  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:51.219305  662586 cri.go:89] found id: ""
	I1209 11:54:51.219337  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.219348  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:51.219361  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:51.219380  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:51.256295  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:51.256338  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:51.313751  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:51.313806  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:51.326940  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:51.326977  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:51.397395  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:51.397428  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:51.397445  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:51.396434  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.896554  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:51.456719  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.951566  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:52.592043  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:55.091800  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.975557  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:53.989509  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:53.989581  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:54.024363  662586 cri.go:89] found id: ""
	I1209 11:54:54.024403  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.024416  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:54.024423  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:54.024484  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:54.062618  662586 cri.go:89] found id: ""
	I1209 11:54:54.062649  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.062659  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:54.062667  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:54.062739  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:54.100194  662586 cri.go:89] found id: ""
	I1209 11:54:54.100231  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.100243  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:54.100252  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:54.100324  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:54.135302  662586 cri.go:89] found id: ""
	I1209 11:54:54.135341  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.135354  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:54.135363  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:54.135447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:54.170898  662586 cri.go:89] found id: ""
	I1209 11:54:54.170940  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.170953  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:54.170963  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:54.171035  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:54.205098  662586 cri.go:89] found id: ""
	I1209 11:54:54.205138  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.205151  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:54.205159  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:54.205223  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:54.239153  662586 cri.go:89] found id: ""
	I1209 11:54:54.239210  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.239226  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:54.239234  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:54.239307  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:54.278213  662586 cri.go:89] found id: ""
	I1209 11:54:54.278248  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.278260  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:54.278275  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:54.278296  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:54.348095  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:54.348128  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:54.348156  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:54.427181  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:54.427224  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:54.467623  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:54.467656  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:54.519690  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:54.519734  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:57.033524  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:57.046420  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:57.046518  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:57.079588  662586 cri.go:89] found id: ""
	I1209 11:54:57.079616  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.079626  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:57.079633  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:57.079687  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:57.114944  662586 cri.go:89] found id: ""
	I1209 11:54:57.114973  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.114982  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:57.114988  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:57.115043  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:57.147667  662586 cri.go:89] found id: ""
	I1209 11:54:57.147708  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.147721  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:57.147730  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:57.147794  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:57.182339  662586 cri.go:89] found id: ""
	I1209 11:54:57.182370  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.182386  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:57.182395  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:57.182470  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:57.223129  662586 cri.go:89] found id: ""
	I1209 11:54:57.223170  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.223186  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:57.223197  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:57.223270  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:57.262351  662586 cri.go:89] found id: ""
	I1209 11:54:57.262386  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.262398  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:57.262409  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:57.262471  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:57.298743  662586 cri.go:89] found id: ""
	I1209 11:54:57.298772  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.298782  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:57.298789  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:57.298856  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:57.339030  662586 cri.go:89] found id: ""
	I1209 11:54:57.339064  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.339073  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:57.339085  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:57.339122  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:57.352603  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:57.352637  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:57.426627  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:57.426653  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:57.426669  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:57.515357  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:57.515401  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:57.554882  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:57.554925  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:56.396610  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:58.895822  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:56.451429  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:58.951440  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:57.590864  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:00.091967  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:00.112082  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:00.124977  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:00.125056  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:00.159003  662586 cri.go:89] found id: ""
	I1209 11:55:00.159032  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.159041  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:00.159048  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:00.159101  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:00.192479  662586 cri.go:89] found id: ""
	I1209 11:55:00.192515  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.192527  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:00.192533  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:00.192587  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:00.226146  662586 cri.go:89] found id: ""
	I1209 11:55:00.226194  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.226208  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:00.226216  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:00.226273  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:00.260389  662586 cri.go:89] found id: ""
	I1209 11:55:00.260420  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.260430  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:00.260442  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:00.260500  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:00.296091  662586 cri.go:89] found id: ""
	I1209 11:55:00.296121  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.296131  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:00.296138  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:00.296195  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:00.332101  662586 cri.go:89] found id: ""
	I1209 11:55:00.332137  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.332150  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:00.332158  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:00.332244  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:00.377329  662586 cri.go:89] found id: ""
	I1209 11:55:00.377358  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.377368  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:00.377374  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:00.377438  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:00.415660  662586 cri.go:89] found id: ""
	I1209 11:55:00.415688  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.415751  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:00.415767  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:00.415781  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:00.467734  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:00.467776  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:00.481244  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:00.481280  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:00.545721  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:00.545755  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:00.545777  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:00.624482  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:00.624533  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:01.396452  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.895539  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:01.452337  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.950752  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:05.951246  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:02.092654  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:04.592173  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.168340  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:03.183354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:03.183439  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:03.223131  662586 cri.go:89] found id: ""
	I1209 11:55:03.223171  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.223185  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:03.223193  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:03.223263  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:03.256561  662586 cri.go:89] found id: ""
	I1209 11:55:03.256595  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.256603  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:03.256609  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:03.256667  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:03.289670  662586 cri.go:89] found id: ""
	I1209 11:55:03.289707  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.289722  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:03.289738  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:03.289813  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:03.323687  662586 cri.go:89] found id: ""
	I1209 11:55:03.323714  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.323724  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:03.323730  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:03.323786  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:03.358163  662586 cri.go:89] found id: ""
	I1209 11:55:03.358221  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.358233  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:03.358241  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:03.358311  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:03.399688  662586 cri.go:89] found id: ""
	I1209 11:55:03.399721  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.399734  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:03.399744  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:03.399812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:03.433909  662586 cri.go:89] found id: ""
	I1209 11:55:03.433939  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.433948  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:03.433954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:03.434011  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:03.470208  662586 cri.go:89] found id: ""
	I1209 11:55:03.470239  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.470248  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:03.470270  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:03.470289  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:03.545801  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:03.545848  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:03.584357  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:03.584389  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:03.641241  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:03.641283  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:03.657034  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:03.657080  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:03.731285  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:06.232380  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:06.246339  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:06.246411  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:06.281323  662586 cri.go:89] found id: ""
	I1209 11:55:06.281362  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.281377  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:06.281385  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:06.281444  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:06.318225  662586 cri.go:89] found id: ""
	I1209 11:55:06.318261  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.318277  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:06.318293  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:06.318364  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:06.353649  662586 cri.go:89] found id: ""
	I1209 11:55:06.353685  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.353699  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:06.353708  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:06.353782  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:06.395204  662586 cri.go:89] found id: ""
	I1209 11:55:06.395242  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.395257  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:06.395266  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:06.395335  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:06.436421  662586 cri.go:89] found id: ""
	I1209 11:55:06.436452  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.436462  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:06.436469  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:06.436524  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:06.472218  662586 cri.go:89] found id: ""
	I1209 11:55:06.472246  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.472255  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:06.472268  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:06.472335  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:06.506585  662586 cri.go:89] found id: ""
	I1209 11:55:06.506629  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.506640  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:06.506647  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:06.506702  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:06.541442  662586 cri.go:89] found id: ""
	I1209 11:55:06.541472  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.541481  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:06.541493  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:06.541512  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:06.592642  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:06.592682  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:06.606764  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:06.606805  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:06.677693  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:06.677720  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:06.677740  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:06.766074  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:06.766124  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:05.896263  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:08.396283  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:07.951409  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:10.451540  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:06.592724  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:09.091961  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:09.305144  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:09.319352  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:09.319444  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:09.357918  662586 cri.go:89] found id: ""
	I1209 11:55:09.358027  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.358066  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:09.358077  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:09.358139  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:09.413181  662586 cri.go:89] found id: ""
	I1209 11:55:09.413213  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.413226  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:09.413234  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:09.413310  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:09.448417  662586 cri.go:89] found id: ""
	I1209 11:55:09.448460  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.448471  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:09.448480  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:09.448566  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:09.489732  662586 cri.go:89] found id: ""
	I1209 11:55:09.489765  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.489775  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:09.489781  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:09.489845  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:09.524919  662586 cri.go:89] found id: ""
	I1209 11:55:09.524948  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.524959  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:09.524968  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:09.525051  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:09.563268  662586 cri.go:89] found id: ""
	I1209 11:55:09.563301  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.563311  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:09.563318  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:09.563373  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:09.598747  662586 cri.go:89] found id: ""
	I1209 11:55:09.598780  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.598790  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:09.598798  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:09.598866  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:09.634447  662586 cri.go:89] found id: ""
	I1209 11:55:09.634479  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.634492  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:09.634505  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:09.634520  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:09.647380  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:09.647419  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:09.721335  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:09.721363  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:09.721380  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:09.801039  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:09.801088  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:09.840929  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:09.840971  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:12.393810  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:12.407553  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:12.407654  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:12.444391  662586 cri.go:89] found id: ""
	I1209 11:55:12.444437  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.444450  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:12.444459  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:12.444533  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:12.482714  662586 cri.go:89] found id: ""
	I1209 11:55:12.482752  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.482764  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:12.482771  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:12.482853  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:12.518139  662586 cri.go:89] found id: ""
	I1209 11:55:12.518187  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.518202  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:12.518211  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:12.518281  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:12.556903  662586 cri.go:89] found id: ""
	I1209 11:55:12.556938  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.556950  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:12.556958  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:12.557028  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:12.591915  662586 cri.go:89] found id: ""
	I1209 11:55:12.591953  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.591963  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:12.591971  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:12.592038  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:12.629767  662586 cri.go:89] found id: ""
	I1209 11:55:12.629797  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.629806  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:12.629812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:12.629878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:12.667677  662586 cri.go:89] found id: ""
	I1209 11:55:12.667710  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.667720  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:12.667727  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:12.667781  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:10.896109  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.896992  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.451770  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:14.952359  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:11.591952  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:14.092213  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.705720  662586 cri.go:89] found id: ""
	I1209 11:55:12.705747  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.705756  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:12.705766  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:12.705780  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:12.758399  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:12.758441  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:12.772297  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:12.772336  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:12.839545  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:12.839569  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:12.839582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:12.918424  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:12.918467  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:15.458122  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:15.473193  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:15.473284  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:15.508756  662586 cri.go:89] found id: ""
	I1209 11:55:15.508790  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.508799  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:15.508806  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:15.508862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:15.544735  662586 cri.go:89] found id: ""
	I1209 11:55:15.544770  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.544782  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:15.544791  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:15.544866  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:15.577169  662586 cri.go:89] found id: ""
	I1209 11:55:15.577200  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.577210  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:15.577216  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:15.577277  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:15.610662  662586 cri.go:89] found id: ""
	I1209 11:55:15.610690  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.610700  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:15.610707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:15.610763  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:15.645339  662586 cri.go:89] found id: ""
	I1209 11:55:15.645375  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.645386  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:15.645394  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:15.645469  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:15.682044  662586 cri.go:89] found id: ""
	I1209 11:55:15.682079  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.682096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:15.682106  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:15.682201  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:15.717193  662586 cri.go:89] found id: ""
	I1209 11:55:15.717228  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.717245  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:15.717256  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:15.717332  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:15.751756  662586 cri.go:89] found id: ""
	I1209 11:55:15.751792  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.751803  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:15.751813  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:15.751827  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:15.811010  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:15.811063  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:15.842556  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:15.842597  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:15.920169  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:15.920195  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:15.920209  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:16.003180  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:16.003226  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:15.395666  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:17.396041  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:19.396262  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:17.451272  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:19.951638  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:16.591423  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:18.592456  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:21.090108  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:18.542563  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:18.555968  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:18.556059  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:18.588746  662586 cri.go:89] found id: ""
	I1209 11:55:18.588780  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.588790  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:18.588797  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:18.588854  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:18.623664  662586 cri.go:89] found id: ""
	I1209 11:55:18.623707  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.623720  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:18.623728  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:18.623798  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:18.659012  662586 cri.go:89] found id: ""
	I1209 11:55:18.659051  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.659065  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:18.659074  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:18.659148  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:18.693555  662586 cri.go:89] found id: ""
	I1209 11:55:18.693588  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.693600  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:18.693607  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:18.693661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:18.726609  662586 cri.go:89] found id: ""
	I1209 11:55:18.726641  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.726652  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:18.726659  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:18.726712  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:18.760654  662586 cri.go:89] found id: ""
	I1209 11:55:18.760682  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.760694  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:18.760704  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:18.760761  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:18.794656  662586 cri.go:89] found id: ""
	I1209 11:55:18.794688  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.794699  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:18.794706  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:18.794769  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:18.829988  662586 cri.go:89] found id: ""
	I1209 11:55:18.830030  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.830045  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:18.830059  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:18.830073  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:18.872523  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:18.872558  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:18.929408  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:18.929449  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:18.943095  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:18.943133  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:19.009125  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:19.009150  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:19.009164  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:21.587418  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:21.606271  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:21.606358  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:21.653536  662586 cri.go:89] found id: ""
	I1209 11:55:21.653574  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.653586  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:21.653595  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:21.653671  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:21.687023  662586 cri.go:89] found id: ""
	I1209 11:55:21.687049  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.687060  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:21.687068  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:21.687131  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:21.720112  662586 cri.go:89] found id: ""
	I1209 11:55:21.720150  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.720163  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:21.720171  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:21.720243  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:21.754697  662586 cri.go:89] found id: ""
	I1209 11:55:21.754729  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.754740  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:21.754749  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:21.754814  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:21.793926  662586 cri.go:89] found id: ""
	I1209 11:55:21.793957  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.793967  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:21.793973  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:21.794040  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:21.827572  662586 cri.go:89] found id: ""
	I1209 11:55:21.827609  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.827622  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:21.827633  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:21.827700  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:21.861442  662586 cri.go:89] found id: ""
	I1209 11:55:21.861472  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.861490  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:21.861499  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:21.861565  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:21.894858  662586 cri.go:89] found id: ""
	I1209 11:55:21.894884  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.894892  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:21.894901  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:21.894914  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:21.942567  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:21.942625  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:21.956849  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:21.956879  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:22.020700  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:22.020724  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:22.020738  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:22.095730  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:22.095767  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:21.896304  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.395936  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:21.951928  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.450997  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:23.090962  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:25.091816  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.631715  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:24.644165  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:24.644252  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:24.677720  662586 cri.go:89] found id: ""
	I1209 11:55:24.677757  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.677769  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:24.677778  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:24.677835  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:24.711053  662586 cri.go:89] found id: ""
	I1209 11:55:24.711086  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.711095  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:24.711101  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:24.711154  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:24.744107  662586 cri.go:89] found id: ""
	I1209 11:55:24.744139  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.744148  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:24.744154  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:24.744210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:24.777811  662586 cri.go:89] found id: ""
	I1209 11:55:24.777853  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.777866  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:24.777876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:24.777938  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:24.810524  662586 cri.go:89] found id: ""
	I1209 11:55:24.810558  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.810571  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:24.810580  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:24.810648  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:24.843551  662586 cri.go:89] found id: ""
	I1209 11:55:24.843582  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.843590  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:24.843597  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:24.843649  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:24.875342  662586 cri.go:89] found id: ""
	I1209 11:55:24.875371  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.875384  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:24.875390  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:24.875446  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:24.910298  662586 cri.go:89] found id: ""
	I1209 11:55:24.910329  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.910340  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:24.910352  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:24.910377  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:24.962151  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:24.962204  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:24.976547  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:24.976577  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:25.050606  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:25.050635  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:25.050652  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:25.134204  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:25.134254  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:27.671220  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:27.685132  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:27.685194  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:26.895311  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:28.895954  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:26.950106  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:28.950915  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:30.952019  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:27.591908  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:30.090353  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:27.718113  662586 cri.go:89] found id: ""
	I1209 11:55:27.718141  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.718150  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:27.718160  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:27.718242  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:27.752350  662586 cri.go:89] found id: ""
	I1209 11:55:27.752384  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.752395  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:27.752401  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:27.752481  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:27.797360  662586 cri.go:89] found id: ""
	I1209 11:55:27.797393  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.797406  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:27.797415  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:27.797488  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:27.834549  662586 cri.go:89] found id: ""
	I1209 11:55:27.834579  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.834588  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:27.834594  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:27.834655  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:27.874403  662586 cri.go:89] found id: ""
	I1209 11:55:27.874440  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.874465  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:27.874474  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:27.874557  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:27.914324  662586 cri.go:89] found id: ""
	I1209 11:55:27.914360  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.914373  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:27.914380  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:27.914450  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:27.948001  662586 cri.go:89] found id: ""
	I1209 11:55:27.948043  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.948056  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:27.948066  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:27.948219  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:27.982329  662586 cri.go:89] found id: ""
	I1209 11:55:27.982360  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.982369  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:27.982379  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:27.982391  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:28.038165  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:28.038228  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:28.051578  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:28.051609  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:28.119914  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:28.119937  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:28.119951  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:28.195634  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:28.195679  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:30.735392  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:30.748430  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:30.748521  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:30.780500  662586 cri.go:89] found id: ""
	I1209 11:55:30.780528  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.780537  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:30.780544  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:30.780606  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:30.812430  662586 cri.go:89] found id: ""
	I1209 11:55:30.812462  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.812470  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:30.812477  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:30.812530  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:30.854030  662586 cri.go:89] found id: ""
	I1209 11:55:30.854057  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.854066  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:30.854073  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:30.854130  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:30.892144  662586 cri.go:89] found id: ""
	I1209 11:55:30.892182  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.892202  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:30.892211  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:30.892284  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:30.927540  662586 cri.go:89] found id: ""
	I1209 11:55:30.927576  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.927590  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:30.927597  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:30.927660  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:30.963820  662586 cri.go:89] found id: ""
	I1209 11:55:30.963852  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.963861  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:30.963867  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:30.963920  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:30.997793  662586 cri.go:89] found id: ""
	I1209 11:55:30.997819  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.997828  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:30.997836  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:30.997902  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:31.031649  662586 cri.go:89] found id: ""
	I1209 11:55:31.031699  662586 logs.go:282] 0 containers: []
	W1209 11:55:31.031712  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:31.031726  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:31.031746  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:31.101464  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:31.101492  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:31.101509  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:31.184635  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:31.184681  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:31.222690  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:31.222732  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:31.276518  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:31.276566  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:30.896544  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.395861  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.451560  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:35.952567  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:32.091788  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:34.592091  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.790941  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:33.805299  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:33.805390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:33.844205  662586 cri.go:89] found id: ""
	I1209 11:55:33.844241  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.844253  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:33.844262  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:33.844337  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:33.883378  662586 cri.go:89] found id: ""
	I1209 11:55:33.883410  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.883424  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:33.883431  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:33.883505  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:33.920007  662586 cri.go:89] found id: ""
	I1209 11:55:33.920049  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.920061  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:33.920074  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:33.920141  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:33.956111  662586 cri.go:89] found id: ""
	I1209 11:55:33.956163  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.956175  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:33.956183  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:33.956241  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:33.990057  662586 cri.go:89] found id: ""
	I1209 11:55:33.990092  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.990102  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:33.990109  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:33.990166  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:34.023046  662586 cri.go:89] found id: ""
	I1209 11:55:34.023082  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.023096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:34.023103  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:34.023171  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:34.055864  662586 cri.go:89] found id: ""
	I1209 11:55:34.055898  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.055909  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:34.055916  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:34.055987  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:34.091676  662586 cri.go:89] found id: ""
	I1209 11:55:34.091710  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.091722  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:34.091733  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:34.091747  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:34.142959  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:34.143002  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:34.156431  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:34.156466  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:34.230277  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:34.230303  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:34.230320  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:34.313660  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:34.313713  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:36.850056  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:36.862486  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:36.862582  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:36.893134  662586 cri.go:89] found id: ""
	I1209 11:55:36.893163  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.893173  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:36.893179  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:36.893257  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:36.927438  662586 cri.go:89] found id: ""
	I1209 11:55:36.927469  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.927479  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:36.927485  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:36.927546  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:36.958787  662586 cri.go:89] found id: ""
	I1209 11:55:36.958818  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.958829  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:36.958837  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:36.958901  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:36.995470  662586 cri.go:89] found id: ""
	I1209 11:55:36.995508  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.995520  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:36.995529  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:36.995590  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:37.026705  662586 cri.go:89] found id: ""
	I1209 11:55:37.026736  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.026746  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:37.026752  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:37.026805  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:37.059717  662586 cri.go:89] found id: ""
	I1209 11:55:37.059748  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.059756  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:37.059762  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:37.059820  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:37.094049  662586 cri.go:89] found id: ""
	I1209 11:55:37.094076  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.094088  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:37.094097  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:37.094190  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:37.128684  662586 cri.go:89] found id: ""
	I1209 11:55:37.128715  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.128724  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:37.128735  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:37.128755  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:37.177932  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:37.177973  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:37.191218  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:37.191252  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:37.256488  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:37.256521  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:37.256538  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:37.330603  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:37.330647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:35.895823  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.895972  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.952764  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:40.450704  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.092013  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:39.591402  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:39.868604  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:39.881991  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:39.882063  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:39.916750  662586 cri.go:89] found id: ""
	I1209 11:55:39.916786  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.916799  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:39.916806  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:39.916874  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:39.957744  662586 cri.go:89] found id: ""
	I1209 11:55:39.957773  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.957781  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:39.957788  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:39.957854  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:39.994613  662586 cri.go:89] found id: ""
	I1209 11:55:39.994645  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.994654  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:39.994661  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:39.994726  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:40.032606  662586 cri.go:89] found id: ""
	I1209 11:55:40.032635  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.032644  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:40.032650  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:40.032710  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:40.067172  662586 cri.go:89] found id: ""
	I1209 11:55:40.067204  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.067214  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:40.067221  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:40.067278  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:40.101391  662586 cri.go:89] found id: ""
	I1209 11:55:40.101423  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.101432  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:40.101439  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:40.101510  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:40.133160  662586 cri.go:89] found id: ""
	I1209 11:55:40.133196  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.133209  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:40.133217  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:40.133283  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:40.166105  662586 cri.go:89] found id: ""
	I1209 11:55:40.166137  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.166145  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:40.166160  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:40.166187  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:40.231525  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:40.231559  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:40.231582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:40.311298  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:40.311354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:40.350040  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:40.350077  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:40.404024  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:40.404061  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:39.896541  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.396800  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.453720  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:44.950595  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.091300  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:44.591230  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.917868  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:42.930289  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:42.930357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:42.962822  662586 cri.go:89] found id: ""
	I1209 11:55:42.962856  662586 logs.go:282] 0 containers: []
	W1209 11:55:42.962869  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:42.962878  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:42.962950  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:42.996932  662586 cri.go:89] found id: ""
	I1209 11:55:42.996962  662586 logs.go:282] 0 containers: []
	W1209 11:55:42.996972  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:42.996979  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:42.997040  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:43.031782  662586 cri.go:89] found id: ""
	I1209 11:55:43.031824  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.031837  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:43.031846  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:43.031915  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:43.064717  662586 cri.go:89] found id: ""
	I1209 11:55:43.064751  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.064764  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:43.064774  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:43.064851  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:43.097248  662586 cri.go:89] found id: ""
	I1209 11:55:43.097278  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.097287  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:43.097294  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:43.097356  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:43.135726  662586 cri.go:89] found id: ""
	I1209 11:55:43.135766  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.135779  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:43.135788  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:43.135881  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:43.171120  662586 cri.go:89] found id: ""
	I1209 11:55:43.171148  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.171157  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:43.171163  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:43.171216  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:43.207488  662586 cri.go:89] found id: ""
	I1209 11:55:43.207523  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.207533  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:43.207545  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:43.207565  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:43.276112  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:43.276142  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:43.276159  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:43.354942  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:43.354990  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:43.392755  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:43.392800  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:43.445708  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:43.445752  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:45.962533  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:45.975508  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:45.975589  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:46.009619  662586 cri.go:89] found id: ""
	I1209 11:55:46.009653  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.009663  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:46.009670  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:46.009726  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:46.042218  662586 cri.go:89] found id: ""
	I1209 11:55:46.042250  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.042259  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:46.042265  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:46.042318  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:46.076204  662586 cri.go:89] found id: ""
	I1209 11:55:46.076239  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.076249  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:46.076255  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:46.076326  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:46.113117  662586 cri.go:89] found id: ""
	I1209 11:55:46.113145  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.113154  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:46.113160  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:46.113225  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:46.148232  662586 cri.go:89] found id: ""
	I1209 11:55:46.148277  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.148293  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:46.148303  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:46.148379  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:46.185028  662586 cri.go:89] found id: ""
	I1209 11:55:46.185083  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.185096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:46.185106  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:46.185200  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:46.222882  662586 cri.go:89] found id: ""
	I1209 11:55:46.222920  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.222933  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:46.222941  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:46.223007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:46.263486  662586 cri.go:89] found id: ""
	I1209 11:55:46.263528  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.263538  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:46.263549  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:46.263565  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:46.340524  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:46.340550  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:46.340567  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:46.422768  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:46.422810  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:46.464344  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:46.464382  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:46.517311  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:46.517354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:44.895283  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.895427  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:48.895674  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.952912  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:48.953432  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.591521  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:49.093057  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:49.031192  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:49.043840  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:49.043929  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:49.077648  662586 cri.go:89] found id: ""
	I1209 11:55:49.077705  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.077720  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:49.077730  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:49.077802  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:49.114111  662586 cri.go:89] found id: ""
	I1209 11:55:49.114138  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.114146  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:49.114154  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:49.114236  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:49.147870  662586 cri.go:89] found id: ""
	I1209 11:55:49.147908  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.147917  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:49.147923  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:49.147976  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:49.185223  662586 cri.go:89] found id: ""
	I1209 11:55:49.185256  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.185269  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:49.185277  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:49.185350  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:49.218037  662586 cri.go:89] found id: ""
	I1209 11:55:49.218068  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.218077  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:49.218084  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:49.218138  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:49.255483  662586 cri.go:89] found id: ""
	I1209 11:55:49.255522  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.255535  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:49.255549  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:49.255629  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:49.288623  662586 cri.go:89] found id: ""
	I1209 11:55:49.288650  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.288659  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:49.288666  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:49.288732  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:49.322880  662586 cri.go:89] found id: ""
	I1209 11:55:49.322913  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.322921  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:49.322930  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:49.322943  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:49.372380  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:49.372428  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:49.385877  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:49.385914  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:49.460078  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:49.460101  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:49.460114  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:49.534588  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:49.534647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:52.071408  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:52.084198  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:52.084276  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:52.118908  662586 cri.go:89] found id: ""
	I1209 11:55:52.118937  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.118950  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:52.118958  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:52.119026  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:52.156494  662586 cri.go:89] found id: ""
	I1209 11:55:52.156521  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.156530  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:52.156535  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:52.156586  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:52.196037  662586 cri.go:89] found id: ""
	I1209 11:55:52.196075  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.196094  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:52.196102  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:52.196177  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:52.229436  662586 cri.go:89] found id: ""
	I1209 11:55:52.229465  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.229477  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:52.229486  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:52.229558  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:52.268751  662586 cri.go:89] found id: ""
	I1209 11:55:52.268785  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.268797  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:52.268805  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:52.268871  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:52.302405  662586 cri.go:89] found id: ""
	I1209 11:55:52.302436  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.302446  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:52.302453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:52.302522  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:52.338641  662586 cri.go:89] found id: ""
	I1209 11:55:52.338676  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.338688  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:52.338698  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:52.338754  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:52.375541  662586 cri.go:89] found id: ""
	I1209 11:55:52.375578  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.375591  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:52.375604  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:52.375624  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:52.389140  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:52.389190  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:52.460520  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:52.460546  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:52.460562  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:52.535234  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:52.535280  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:52.573317  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:52.573354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:50.896292  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:52.896875  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:51.453540  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:53.456640  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:55.950197  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:51.590899  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:53.591317  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:56.092219  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:55.124068  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:55.136800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:55.136868  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:55.169724  662586 cri.go:89] found id: ""
	I1209 11:55:55.169757  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.169769  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:55.169777  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:55.169843  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:55.207466  662586 cri.go:89] found id: ""
	I1209 11:55:55.207514  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.207528  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:55.207537  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:55.207600  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:55.241761  662586 cri.go:89] found id: ""
	I1209 11:55:55.241790  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.241801  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:55.241809  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:55.241874  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:55.274393  662586 cri.go:89] found id: ""
	I1209 11:55:55.274434  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.274447  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:55.274455  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:55.274522  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:55.307942  662586 cri.go:89] found id: ""
	I1209 11:55:55.307988  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.308002  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:55.308012  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:55.308088  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:55.340074  662586 cri.go:89] found id: ""
	I1209 11:55:55.340107  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.340116  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:55.340122  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:55.340196  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:55.388077  662586 cri.go:89] found id: ""
	I1209 11:55:55.388119  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.388140  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:55.388149  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:55.388230  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:55.422923  662586 cri.go:89] found id: ""
	I1209 11:55:55.422961  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.422975  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:55.422990  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:55.423008  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:55.476178  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:55.476219  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:55.489891  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:55.489919  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:55.555705  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:55.555726  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:55.555745  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:55.634818  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:55.634862  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:55.396320  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:57.895122  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:57.951119  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:00.451659  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:58.092427  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:00.590304  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:58.173169  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:58.188529  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:58.188620  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:58.225602  662586 cri.go:89] found id: ""
	I1209 11:55:58.225630  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.225641  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:58.225649  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:58.225709  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:58.259597  662586 cri.go:89] found id: ""
	I1209 11:55:58.259638  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.259652  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:58.259662  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:58.259744  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:58.293287  662586 cri.go:89] found id: ""
	I1209 11:55:58.293320  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.293329  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:58.293336  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:58.293390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:58.326581  662586 cri.go:89] found id: ""
	I1209 11:55:58.326611  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.326622  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:58.326630  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:58.326699  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:58.359636  662586 cri.go:89] found id: ""
	I1209 11:55:58.359665  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.359675  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:58.359681  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:58.359736  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:58.396767  662586 cri.go:89] found id: ""
	I1209 11:55:58.396798  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.396809  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:58.396818  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:58.396887  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:58.428907  662586 cri.go:89] found id: ""
	I1209 11:55:58.428941  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.428954  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:58.428962  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:58.429032  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:58.466082  662586 cri.go:89] found id: ""
	I1209 11:55:58.466124  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.466136  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:58.466149  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:58.466186  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:58.542333  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:58.542378  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:58.582397  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:58.582436  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:58.632980  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:58.633030  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:58.648464  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:58.648514  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:58.711714  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:01.212475  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:01.225574  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:01.225642  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:01.259666  662586 cri.go:89] found id: ""
	I1209 11:56:01.259704  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.259718  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:01.259726  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:01.259800  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:01.295433  662586 cri.go:89] found id: ""
	I1209 11:56:01.295474  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.295495  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:01.295503  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:01.295561  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:01.330316  662586 cri.go:89] found id: ""
	I1209 11:56:01.330352  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.330364  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:01.330373  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:01.330447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:01.366762  662586 cri.go:89] found id: ""
	I1209 11:56:01.366797  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.366808  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:01.366814  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:01.366878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:01.403511  662586 cri.go:89] found id: ""
	I1209 11:56:01.403539  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.403547  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:01.403553  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:01.403604  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:01.436488  662586 cri.go:89] found id: ""
	I1209 11:56:01.436526  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.436538  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:01.436546  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:01.436617  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:01.471647  662586 cri.go:89] found id: ""
	I1209 11:56:01.471676  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.471685  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:01.471690  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:01.471744  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:01.504065  662586 cri.go:89] found id: ""
	I1209 11:56:01.504099  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.504111  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:01.504124  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:01.504143  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:01.553434  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:01.553482  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:01.567537  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:01.567579  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:01.636968  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:01.636995  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:01.637012  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:01.713008  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:01.713049  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:59.896841  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.396972  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.451893  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.453118  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.591218  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.592199  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.253143  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:04.266428  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:04.266512  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:04.298769  662586 cri.go:89] found id: ""
	I1209 11:56:04.298810  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.298823  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:04.298833  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:04.298913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:04.330392  662586 cri.go:89] found id: ""
	I1209 11:56:04.330428  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.330441  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:04.330449  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:04.330528  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:04.362409  662586 cri.go:89] found id: ""
	I1209 11:56:04.362443  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.362455  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:04.362463  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:04.362544  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:04.396853  662586 cri.go:89] found id: ""
	I1209 11:56:04.396884  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.396893  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:04.396899  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:04.396966  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:04.430425  662586 cri.go:89] found id: ""
	I1209 11:56:04.430461  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.430470  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:04.430477  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:04.430531  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:04.465354  662586 cri.go:89] found id: ""
	I1209 11:56:04.465391  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.465403  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:04.465411  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:04.465480  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:04.500114  662586 cri.go:89] found id: ""
	I1209 11:56:04.500156  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.500167  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:04.500179  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:04.500259  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:04.534853  662586 cri.go:89] found id: ""
	I1209 11:56:04.534888  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.534902  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:04.534914  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:04.534928  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:04.586419  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:04.586457  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:04.600690  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:04.600728  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:04.669645  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:04.669685  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:04.669703  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:04.747973  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:04.748026  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:07.288721  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:07.302905  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:07.302975  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:07.336686  662586 cri.go:89] found id: ""
	I1209 11:56:07.336720  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.336728  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:07.336735  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:07.336798  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:07.370119  662586 cri.go:89] found id: ""
	I1209 11:56:07.370150  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.370159  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:07.370165  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:07.370245  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:07.402818  662586 cri.go:89] found id: ""
	I1209 11:56:07.402845  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.402853  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:07.402861  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:07.402923  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:07.437694  662586 cri.go:89] found id: ""
	I1209 11:56:07.437722  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.437732  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:07.437741  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:07.437806  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:07.474576  662586 cri.go:89] found id: ""
	I1209 11:56:07.474611  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.474622  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:07.474629  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:07.474705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:07.508538  662586 cri.go:89] found id: ""
	I1209 11:56:07.508575  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.508585  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:07.508592  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:07.508661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:07.548863  662586 cri.go:89] found id: ""
	I1209 11:56:07.548897  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.548911  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:07.548922  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:07.549093  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:07.592515  662586 cri.go:89] found id: ""
	I1209 11:56:07.592543  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.592555  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:07.592564  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:07.592579  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:07.652176  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:07.652219  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:04.895898  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.395712  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.398273  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:06.950668  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.450539  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.091573  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.591049  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.703040  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:07.703094  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:07.717880  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:07.717924  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:07.783396  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:07.783425  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:07.783441  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:10.362395  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:10.377478  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:10.377574  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:10.411923  662586 cri.go:89] found id: ""
	I1209 11:56:10.411956  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.411969  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:10.411978  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:10.412049  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:10.444601  662586 cri.go:89] found id: ""
	I1209 11:56:10.444633  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.444642  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:10.444648  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:10.444705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:10.486720  662586 cri.go:89] found id: ""
	I1209 11:56:10.486753  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.486763  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:10.486769  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:10.486822  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:10.523535  662586 cri.go:89] found id: ""
	I1209 11:56:10.523572  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.523581  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:10.523587  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:10.523641  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:10.557701  662586 cri.go:89] found id: ""
	I1209 11:56:10.557741  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.557754  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:10.557762  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:10.557834  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:10.593914  662586 cri.go:89] found id: ""
	I1209 11:56:10.593949  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.593959  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:10.593965  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:10.594017  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:10.626367  662586 cri.go:89] found id: ""
	I1209 11:56:10.626469  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.626482  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:10.626489  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:10.626547  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:10.665415  662586 cri.go:89] found id: ""
	I1209 11:56:10.665446  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.665456  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:10.665467  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:10.665480  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:10.747483  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:10.747532  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:10.787728  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:10.787758  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:10.840678  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:10.840722  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:10.855774  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:10.855809  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:10.929638  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:11.896254  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:14.395661  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:11.451031  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.452502  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:15.951720  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:11.592197  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.593711  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:16.091641  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.430793  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:13.446156  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:13.446261  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:13.491624  662586 cri.go:89] found id: ""
	I1209 11:56:13.491662  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.491675  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:13.491684  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:13.491758  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:13.537619  662586 cri.go:89] found id: ""
	I1209 11:56:13.537653  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.537666  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:13.537675  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:13.537750  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:13.585761  662586 cri.go:89] found id: ""
	I1209 11:56:13.585796  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.585810  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:13.585819  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:13.585883  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:13.620740  662586 cri.go:89] found id: ""
	I1209 11:56:13.620774  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.620785  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:13.620791  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:13.620858  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:13.654405  662586 cri.go:89] found id: ""
	I1209 11:56:13.654433  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.654442  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:13.654448  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:13.654509  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:13.687520  662586 cri.go:89] found id: ""
	I1209 11:56:13.687547  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.687558  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:13.687566  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:13.687642  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:13.721105  662586 cri.go:89] found id: ""
	I1209 11:56:13.721140  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.721153  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:13.721162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:13.721238  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:13.753900  662586 cri.go:89] found id: ""
	I1209 11:56:13.753933  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.753945  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:13.753960  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:13.753978  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:13.805864  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:13.805909  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:13.819356  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:13.819393  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:13.896097  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:13.896128  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:13.896150  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:13.979041  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:13.979084  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:16.516777  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:16.529916  662586 kubeadm.go:597] duration metric: took 4m1.869807937s to restartPrimaryControlPlane
	W1209 11:56:16.530015  662586 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:56:16.530067  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:56:16.396353  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.896097  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.451294  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:20.452525  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.092780  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:20.593275  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.635832  662586 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.105742271s)
	I1209 11:56:18.635914  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:56:18.651678  662586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:56:18.661965  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:56:18.672060  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:56:18.672082  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:56:18.672147  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:56:18.681627  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:56:18.681697  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:56:18.691514  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:56:18.701210  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:56:18.701292  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:56:18.710934  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:56:18.720506  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:56:18.720583  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:56:18.729996  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:56:18.739425  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:56:18.739486  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:56:18.748788  662586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:56:18.981849  662586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:56:21.396764  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:23.894781  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:22.950912  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:24.951678  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:23.091306  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:24.592439  662109 pod_ready.go:82] duration metric: took 4m0.007699806s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	E1209 11:56:24.592477  662109 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1209 11:56:24.592486  662109 pod_ready.go:39] duration metric: took 4m7.416528348s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:56:24.592504  662109 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:56:24.592537  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:24.592590  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:24.643050  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:24.643085  662109 cri.go:89] found id: ""
	I1209 11:56:24.643094  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:24.643151  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.647529  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:24.647590  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:24.683125  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:24.683150  662109 cri.go:89] found id: ""
	I1209 11:56:24.683159  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:24.683222  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.687584  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:24.687706  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:24.720663  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:24.720699  662109 cri.go:89] found id: ""
	I1209 11:56:24.720708  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:24.720769  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.724881  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:24.724942  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:24.766055  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:24.766081  662109 cri.go:89] found id: ""
	I1209 11:56:24.766091  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:24.766152  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.770491  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:24.770557  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:24.804523  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:24.804549  662109 cri.go:89] found id: ""
	I1209 11:56:24.804558  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:24.804607  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.808452  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:24.808528  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:24.846043  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:24.846072  662109 cri.go:89] found id: ""
	I1209 11:56:24.846084  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:24.846140  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.849991  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:24.850057  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:24.884853  662109 cri.go:89] found id: ""
	I1209 11:56:24.884889  662109 logs.go:282] 0 containers: []
	W1209 11:56:24.884902  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:24.884912  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:24.884983  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:24.920103  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:24.920131  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:24.920135  662109 cri.go:89] found id: ""
	I1209 11:56:24.920152  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:24.920223  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.924212  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.928416  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:24.928436  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:25.077407  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:25.077468  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:25.125600  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:25.125649  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:25.163222  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:25.163268  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:25.208430  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:25.208465  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:25.245884  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:25.245917  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:25.318723  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:25.318775  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:25.333173  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:25.333207  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:25.394636  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:25.394683  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:25.435210  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:25.435248  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:25.482142  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:25.482184  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:25.516975  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:25.517006  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:25.565526  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:25.565565  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:25.896281  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:28.395529  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:27.454449  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:29.950704  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:28.549071  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:28.567288  662109 api_server.go:72] duration metric: took 4m18.770451099s to wait for apiserver process to appear ...
	I1209 11:56:28.567319  662109 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:56:28.567367  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:28.567418  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:28.603341  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:28.603365  662109 cri.go:89] found id: ""
	I1209 11:56:28.603372  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:28.603423  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.607416  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:28.607493  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:28.647437  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:28.647465  662109 cri.go:89] found id: ""
	I1209 11:56:28.647477  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:28.647539  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.651523  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:28.651584  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:28.687889  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:28.687920  662109 cri.go:89] found id: ""
	I1209 11:56:28.687929  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:28.687983  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.692025  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:28.692100  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:28.728934  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:28.728961  662109 cri.go:89] found id: ""
	I1209 11:56:28.728969  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:28.729020  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.733217  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:28.733300  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:28.768700  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:28.768726  662109 cri.go:89] found id: ""
	I1209 11:56:28.768735  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:28.768790  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.772844  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:28.772921  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:28.812073  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:28.812104  662109 cri.go:89] found id: ""
	I1209 11:56:28.812116  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:28.812195  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.816542  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:28.816612  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:28.850959  662109 cri.go:89] found id: ""
	I1209 11:56:28.850997  662109 logs.go:282] 0 containers: []
	W1209 11:56:28.851010  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:28.851018  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:28.851075  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:28.894115  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:28.894142  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:28.894148  662109 cri.go:89] found id: ""
	I1209 11:56:28.894157  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:28.894228  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.899260  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.903033  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:28.903055  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:28.916411  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:28.916447  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:28.965873  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:28.965911  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:29.003553  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:29.003591  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:29.038945  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:29.038989  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:29.079595  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:29.079636  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:29.117632  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:29.117665  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:29.556193  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:29.556245  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:29.629530  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:29.629571  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:29.746102  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:29.746137  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:29.799342  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:29.799379  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:29.851197  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:29.851254  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:29.884688  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:29.884725  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:30.396025  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:32.396195  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:34.396605  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:31.951405  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:34.451838  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:32.425773  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:56:32.432276  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I1209 11:56:32.433602  662109 api_server.go:141] control plane version: v1.31.2
	I1209 11:56:32.433634  662109 api_server.go:131] duration metric: took 3.866306159s to wait for apiserver health ...
	I1209 11:56:32.433647  662109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:56:32.433680  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:32.433744  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:32.471560  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:32.471593  662109 cri.go:89] found id: ""
	I1209 11:56:32.471604  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:32.471684  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.475735  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:32.475809  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:32.509788  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:32.509821  662109 cri.go:89] found id: ""
	I1209 11:56:32.509833  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:32.509889  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.513849  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:32.513908  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:32.547022  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:32.547046  662109 cri.go:89] found id: ""
	I1209 11:56:32.547055  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:32.547113  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.551393  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:32.551476  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:32.586478  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:32.586516  662109 cri.go:89] found id: ""
	I1209 11:56:32.586536  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:32.586605  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.592876  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:32.592950  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:32.626775  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:32.626803  662109 cri.go:89] found id: ""
	I1209 11:56:32.626812  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:32.626869  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.630757  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:32.630825  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:32.663980  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:32.664013  662109 cri.go:89] found id: ""
	I1209 11:56:32.664026  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:32.664093  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.668368  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:32.668449  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:32.704638  662109 cri.go:89] found id: ""
	I1209 11:56:32.704675  662109 logs.go:282] 0 containers: []
	W1209 11:56:32.704688  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:32.704695  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:32.704752  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:32.743694  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:32.743729  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:32.743735  662109 cri.go:89] found id: ""
	I1209 11:56:32.743746  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:32.743814  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.749146  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.753226  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:32.753253  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:32.787832  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:32.787877  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:32.824859  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:32.824891  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:32.881776  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:32.881808  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:32.919018  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:32.919064  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:32.956839  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:32.956869  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:33.334255  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:33.334300  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:33.406008  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:33.406049  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:33.453689  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:33.453724  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:33.496168  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:33.496209  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:33.532057  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:33.532090  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:33.575050  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:33.575087  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:33.588543  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:33.588575  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:36.194483  662109 system_pods.go:59] 8 kube-system pods found
	I1209 11:56:36.194516  662109 system_pods.go:61] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running
	I1209 11:56:36.194522  662109 system_pods.go:61] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running
	I1209 11:56:36.194527  662109 system_pods.go:61] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running
	I1209 11:56:36.194531  662109 system_pods.go:61] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running
	I1209 11:56:36.194534  662109 system_pods.go:61] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:56:36.194538  662109 system_pods.go:61] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running
	I1209 11:56:36.194543  662109 system_pods.go:61] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:56:36.194549  662109 system_pods.go:61] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running
	I1209 11:56:36.194559  662109 system_pods.go:74] duration metric: took 3.76090495s to wait for pod list to return data ...
	I1209 11:56:36.194567  662109 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:56:36.197070  662109 default_sa.go:45] found service account: "default"
	I1209 11:56:36.197094  662109 default_sa.go:55] duration metric: took 2.520926ms for default service account to be created ...
	I1209 11:56:36.197104  662109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:56:36.201494  662109 system_pods.go:86] 8 kube-system pods found
	I1209 11:56:36.201518  662109 system_pods.go:89] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running
	I1209 11:56:36.201524  662109 system_pods.go:89] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running
	I1209 11:56:36.201528  662109 system_pods.go:89] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running
	I1209 11:56:36.201533  662109 system_pods.go:89] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running
	I1209 11:56:36.201537  662109 system_pods.go:89] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:56:36.201540  662109 system_pods.go:89] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running
	I1209 11:56:36.201547  662109 system_pods.go:89] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:56:36.201551  662109 system_pods.go:89] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running
	I1209 11:56:36.201558  662109 system_pods.go:126] duration metric: took 4.448871ms to wait for k8s-apps to be running ...
	I1209 11:56:36.201567  662109 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:56:36.201628  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:56:36.217457  662109 system_svc.go:56] duration metric: took 15.878252ms WaitForService to wait for kubelet
	I1209 11:56:36.217503  662109 kubeadm.go:582] duration metric: took 4m26.420670146s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:56:36.217527  662109 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:56:36.220498  662109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:56:36.220526  662109 node_conditions.go:123] node cpu capacity is 2
	I1209 11:56:36.220572  662109 node_conditions.go:105] duration metric: took 3.039367ms to run NodePressure ...
	I1209 11:56:36.220586  662109 start.go:241] waiting for startup goroutines ...
	I1209 11:56:36.220597  662109 start.go:246] waiting for cluster config update ...
	I1209 11:56:36.220628  662109 start.go:255] writing updated cluster config ...
	I1209 11:56:36.220974  662109 ssh_runner.go:195] Run: rm -f paused
	I1209 11:56:36.272920  662109 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:56:36.274686  662109 out.go:177] * Done! kubectl is now configured to use "no-preload-820741" cluster and "default" namespace by default
	I1209 11:56:36.895681  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:38.896066  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:36.951281  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:39.455225  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:41.395880  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:43.895464  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:41.951287  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:44.451357  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:45.896184  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:48.398617  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:46.451733  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:48.950857  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:50.950964  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:50.895678  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:52.896291  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:53.389365  663024 pod_ready.go:82] duration metric: took 4m0.00015362s for pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace to be "Ready" ...
	E1209 11:56:53.389414  663024 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1209 11:56:53.389440  663024 pod_ready.go:39] duration metric: took 4m13.044002506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:56:53.389480  663024 kubeadm.go:597] duration metric: took 4m21.286289463s to restartPrimaryControlPlane
	W1209 11:56:53.389572  663024 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:56:53.389610  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:56:52.951153  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:55.451223  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:57.950413  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:00.449904  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:02.450069  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:04.451074  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:06.950873  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:08.951176  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:11.450596  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:13.451552  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:13.944884  661546 pod_ready.go:82] duration metric: took 4m0.000348644s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" ...
	E1209 11:57:13.944919  661546 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I1209 11:57:13.944943  661546 pod_ready.go:39] duration metric: took 4m14.049505666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:13.944980  661546 kubeadm.go:597] duration metric: took 4m22.094543781s to restartPrimaryControlPlane
	W1209 11:57:13.945086  661546 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:57:13.945123  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:57:19.569119  663024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.179481312s)
	I1209 11:57:19.569196  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:19.583584  663024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:57:19.592807  663024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:57:19.602121  663024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:57:19.602190  663024 kubeadm.go:157] found existing configuration files:
	
	I1209 11:57:19.602249  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 11:57:19.611109  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:57:19.611187  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:57:19.620264  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 11:57:19.629026  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:57:19.629103  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:57:19.638036  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 11:57:19.646265  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:57:19.646331  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:57:19.655187  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 11:57:19.663908  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:57:19.663962  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:57:19.673002  663024 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:57:19.717664  663024 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 11:57:19.717737  663024 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:57:19.818945  663024 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:57:19.819065  663024 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:57:19.819160  663024 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 11:57:19.828186  663024 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:57:19.829831  663024 out.go:235]   - Generating certificates and keys ...
	I1209 11:57:19.829938  663024 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:57:19.830031  663024 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:57:19.830145  663024 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:57:19.830252  663024 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:57:19.830377  663024 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:57:19.830470  663024 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:57:19.830568  663024 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:57:19.830644  663024 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:57:19.830745  663024 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:57:19.830825  663024 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:57:19.830878  663024 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:57:19.830963  663024 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:57:19.961813  663024 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:57:20.436964  663024 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 11:57:20.652041  663024 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:57:20.837664  663024 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:57:20.892035  663024 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:57:20.892497  663024 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:57:20.895295  663024 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:57:20.896871  663024 out.go:235]   - Booting up control plane ...
	I1209 11:57:20.896992  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:57:20.897139  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:57:20.897260  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:57:20.914735  663024 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:57:20.920520  663024 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:57:20.920566  663024 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:57:21.047290  663024 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 11:57:21.047437  663024 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 11:57:22.049131  663024 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001914766s
	I1209 11:57:22.049257  663024 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 11:57:27.053443  663024 kubeadm.go:310] [api-check] The API server is healthy after 5.002570817s
	I1209 11:57:27.068518  663024 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 11:57:27.086371  663024 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 11:57:27.114617  663024 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 11:57:27.114833  663024 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-482476 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 11:57:27.131354  663024 kubeadm.go:310] [bootstrap-token] Using token: 6aanjy.0y855mmcca5ic9co
	I1209 11:57:27.132852  663024 out.go:235]   - Configuring RBAC rules ...
	I1209 11:57:27.132992  663024 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 11:57:27.139770  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 11:57:27.147974  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 11:57:27.155508  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 11:57:27.159181  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 11:57:27.163403  663024 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 11:57:27.458812  663024 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 11:57:27.900322  663024 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 11:57:28.458864  663024 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 11:57:28.459944  663024 kubeadm.go:310] 
	I1209 11:57:28.460043  663024 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 11:57:28.460054  663024 kubeadm.go:310] 
	I1209 11:57:28.460156  663024 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 11:57:28.460166  663024 kubeadm.go:310] 
	I1209 11:57:28.460198  663024 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 11:57:28.460284  663024 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 11:57:28.460385  663024 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 11:57:28.460414  663024 kubeadm.go:310] 
	I1209 11:57:28.460499  663024 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 11:57:28.460509  663024 kubeadm.go:310] 
	I1209 11:57:28.460576  663024 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 11:57:28.460586  663024 kubeadm.go:310] 
	I1209 11:57:28.460663  663024 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 11:57:28.460766  663024 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 11:57:28.460862  663024 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 11:57:28.460871  663024 kubeadm.go:310] 
	I1209 11:57:28.460992  663024 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 11:57:28.461096  663024 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 11:57:28.461121  663024 kubeadm.go:310] 
	I1209 11:57:28.461244  663024 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 6aanjy.0y855mmcca5ic9co \
	I1209 11:57:28.461395  663024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 11:57:28.461435  663024 kubeadm.go:310] 	--control-plane 
	I1209 11:57:28.461446  663024 kubeadm.go:310] 
	I1209 11:57:28.461551  663024 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 11:57:28.461574  663024 kubeadm.go:310] 
	I1209 11:57:28.461679  663024 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 6aanjy.0y855mmcca5ic9co \
	I1209 11:57:28.461832  663024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 11:57:28.462544  663024 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:57:28.462594  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:57:28.462620  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:57:28.464574  663024 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:57:28.465952  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:57:28.476155  663024 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:57:28.493471  663024 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:57:28.493551  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:28.493594  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-482476 minikube.k8s.io/updated_at=2024_12_09T11_57_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=default-k8s-diff-port-482476 minikube.k8s.io/primary=true
	I1209 11:57:28.506467  663024 ops.go:34] apiserver oom_adj: -16
	I1209 11:57:28.724224  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:29.224971  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:29.724660  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:30.224466  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:30.724354  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:31.224702  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:31.725101  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.224364  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.724357  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.844191  663024 kubeadm.go:1113] duration metric: took 4.350713188s to wait for elevateKubeSystemPrivileges
	I1209 11:57:32.844243  663024 kubeadm.go:394] duration metric: took 5m0.79272843s to StartCluster
	I1209 11:57:32.844287  663024 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:32.844417  663024 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:57:32.846697  663024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:32.847014  663024 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:57:32.847067  663024 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:57:32.847162  663024 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847186  663024 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847192  663024 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.847201  663024 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:57:32.847204  663024 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-482476"
	I1209 11:57:32.847228  663024 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847272  663024 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.847287  663024 addons.go:243] addon metrics-server should already be in state true
	I1209 11:57:32.847285  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:57:32.847328  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.847237  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.847705  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847713  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847750  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.847755  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.847841  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847873  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.848599  663024 out.go:177] * Verifying Kubernetes components...
	I1209 11:57:32.850246  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:57:32.864945  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44639
	I1209 11:57:32.865141  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37211
	I1209 11:57:32.865203  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42835
	I1209 11:57:32.865473  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.865635  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.865733  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.866096  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866115  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866241  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866264  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866241  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866316  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866642  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866654  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866656  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866865  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.867243  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.867287  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.867321  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.867358  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.871085  663024 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.871109  663024 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:57:32.871142  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.871395  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.871431  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.883301  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45049
	I1209 11:57:32.883976  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.884508  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36509
	I1209 11:57:32.884758  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.884775  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.885123  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.885279  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.885610  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.885801  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.885817  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.886142  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.886347  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.888357  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.888762  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
	I1209 11:57:32.889103  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.889192  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.889669  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.889692  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.890035  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.890082  663024 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:57:32.890647  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.890687  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.890867  663024 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:57:32.891756  663024 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:32.891774  663024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:57:32.891794  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.892543  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:57:32.892563  663024 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:57:32.892587  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.896754  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897437  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.897471  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897752  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.897836  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897975  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.898370  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.898381  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.898395  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.898556  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:32.898649  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.898829  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.898975  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.899101  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:32.907891  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34063
	I1209 11:57:32.908317  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.908827  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.908848  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.909352  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.909551  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.911172  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.911417  663024 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:32.911434  663024 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:57:32.911460  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.914016  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.914474  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.914490  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.914646  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.914838  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.914965  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.915071  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:33.067075  663024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:57:33.085671  663024 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-482476" to be "Ready" ...
	I1209 11:57:33.095765  663024 node_ready.go:49] node "default-k8s-diff-port-482476" has status "Ready":"True"
	I1209 11:57:33.095801  663024 node_ready.go:38] duration metric: took 10.096442ms for node "default-k8s-diff-port-482476" to be "Ready" ...
	I1209 11:57:33.095815  663024 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:33.105497  663024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:33.200059  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:33.218467  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:57:33.218496  663024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:57:33.225990  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:33.278736  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:57:33.278772  663024 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:57:33.342270  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:33.342304  663024 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:57:33.412771  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:34.250639  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.050535014s)
	I1209 11:57:34.250706  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.250720  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.250704  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.024681453s)
	I1209 11:57:34.250811  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.250820  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.251151  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.251170  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.251182  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.251192  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.251197  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.251238  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.251245  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.251253  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.251261  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.253136  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.253141  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.253180  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.253182  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.253194  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.253214  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.279650  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.279682  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.280064  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.280116  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.280130  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.656217  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.243394493s)
	I1209 11:57:34.656287  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.656305  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.656641  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.656655  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.656671  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.656683  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.656691  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.656982  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.656999  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.657011  663024 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-482476"
	I1209 11:57:34.658878  663024 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1209 11:57:34.660089  663024 addons.go:510] duration metric: took 1.813029421s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1209 11:57:35.122487  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:36.112072  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.112097  663024 pod_ready.go:82] duration metric: took 3.006564547s for pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.112110  663024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.117521  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.117545  663024 pod_ready.go:82] duration metric: took 5.428168ms for pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.117554  663024 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.122929  663024 pod_ready.go:93] pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.122953  663024 pod_ready.go:82] duration metric: took 5.392834ms for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.122972  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.127025  663024 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.127047  663024 pod_ready.go:82] duration metric: took 4.068175ms for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.127056  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.131036  663024 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.131055  663024 pod_ready.go:82] duration metric: took 3.993825ms for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.131064  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pgs52" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.508951  663024 pod_ready.go:93] pod "kube-proxy-pgs52" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.508980  663024 pod_ready.go:82] duration metric: took 377.910722ms for pod "kube-proxy-pgs52" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.508991  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.909065  663024 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.909093  663024 pod_ready.go:82] duration metric: took 400.095775ms for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.909100  663024 pod_ready.go:39] duration metric: took 3.813270613s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:36.909116  663024 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:57:36.909169  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:57:36.924688  663024 api_server.go:72] duration metric: took 4.077626254s to wait for apiserver process to appear ...
	I1209 11:57:36.924726  663024 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:57:36.924752  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:57:36.930782  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 200:
	ok
	I1209 11:57:36.931734  663024 api_server.go:141] control plane version: v1.31.2
	I1209 11:57:36.931758  663024 api_server.go:131] duration metric: took 7.024599ms to wait for apiserver health ...
	I1209 11:57:36.931766  663024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:57:37.112291  663024 system_pods.go:59] 9 kube-system pods found
	I1209 11:57:37.112323  663024 system_pods.go:61] "coredns-7c65d6cfc9-7rr27" [a5dd0401-80bf-4c87-9771-e1837c960425] Running
	I1209 11:57:37.112328  663024 system_pods.go:61] "coredns-7c65d6cfc9-bb47s" [2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4] Running
	I1209 11:57:37.112332  663024 system_pods.go:61] "etcd-default-k8s-diff-port-482476" [01dfcc5e-2ac5-4cee-8b68-ac1f3bf8866b] Running
	I1209 11:57:37.112337  663024 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-482476" [c65f1162-8d8a-47e8-8f1a-4abc0ebc8649] Running
	I1209 11:57:37.112340  663024 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-482476" [363e5f34-878a-434e-8bf2-21cdc06be305] Running
	I1209 11:57:37.112343  663024 system_pods.go:61] "kube-proxy-pgs52" [d5a3463e-e955-4345-9559-b23cce44fa0e] Running
	I1209 11:57:37.112346  663024 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-482476" [53f924fa-554b-4ceb-b171-efcfecdc137e] Running
	I1209 11:57:37.112356  663024 system_pods.go:61] "metrics-server-6867b74b74-2lmtn" [60803d31-d0b0-4d51-a9f2-cadafd184a90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:57:37.112363  663024 system_pods.go:61] "storage-provisioner" [6b53e3ba-9bc9-4b5a-bec9-d06336616c8a] Running
	I1209 11:57:37.112373  663024 system_pods.go:74] duration metric: took 180.599339ms to wait for pod list to return data ...
	I1209 11:57:37.112387  663024 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:57:37.309750  663024 default_sa.go:45] found service account: "default"
	I1209 11:57:37.309777  663024 default_sa.go:55] duration metric: took 197.382304ms for default service account to be created ...
	I1209 11:57:37.309787  663024 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:57:37.513080  663024 system_pods.go:86] 9 kube-system pods found
	I1209 11:57:37.513112  663024 system_pods.go:89] "coredns-7c65d6cfc9-7rr27" [a5dd0401-80bf-4c87-9771-e1837c960425] Running
	I1209 11:57:37.513118  663024 system_pods.go:89] "coredns-7c65d6cfc9-bb47s" [2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4] Running
	I1209 11:57:37.513121  663024 system_pods.go:89] "etcd-default-k8s-diff-port-482476" [01dfcc5e-2ac5-4cee-8b68-ac1f3bf8866b] Running
	I1209 11:57:37.513128  663024 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-482476" [c65f1162-8d8a-47e8-8f1a-4abc0ebc8649] Running
	I1209 11:57:37.513133  663024 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-482476" [363e5f34-878a-434e-8bf2-21cdc06be305] Running
	I1209 11:57:37.513136  663024 system_pods.go:89] "kube-proxy-pgs52" [d5a3463e-e955-4345-9559-b23cce44fa0e] Running
	I1209 11:57:37.513141  663024 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-482476" [53f924fa-554b-4ceb-b171-efcfecdc137e] Running
	I1209 11:57:37.513150  663024 system_pods.go:89] "metrics-server-6867b74b74-2lmtn" [60803d31-d0b0-4d51-a9f2-cadafd184a90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:57:37.513156  663024 system_pods.go:89] "storage-provisioner" [6b53e3ba-9bc9-4b5a-bec9-d06336616c8a] Running
	I1209 11:57:37.513168  663024 system_pods.go:126] duration metric: took 203.373238ms to wait for k8s-apps to be running ...
	I1209 11:57:37.513181  663024 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:57:37.513233  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:37.527419  663024 system_svc.go:56] duration metric: took 14.22618ms WaitForService to wait for kubelet
	I1209 11:57:37.527451  663024 kubeadm.go:582] duration metric: took 4.680397826s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:57:37.527473  663024 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:57:37.710396  663024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:57:37.710429  663024 node_conditions.go:123] node cpu capacity is 2
	I1209 11:57:37.710447  663024 node_conditions.go:105] duration metric: took 182.968526ms to run NodePressure ...
	I1209 11:57:37.710463  663024 start.go:241] waiting for startup goroutines ...
	I1209 11:57:37.710473  663024 start.go:246] waiting for cluster config update ...
	I1209 11:57:37.710487  663024 start.go:255] writing updated cluster config ...
	I1209 11:57:37.710799  663024 ssh_runner.go:195] Run: rm -f paused
	I1209 11:57:37.760468  663024 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:57:37.762472  663024 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-482476" cluster and "default" namespace by default
	I1209 11:57:40.219406  661546 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.274255602s)
	I1209 11:57:40.219478  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:40.234863  661546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:57:40.245357  661546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:57:40.255253  661546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:57:40.255276  661546 kubeadm.go:157] found existing configuration files:
	
	I1209 11:57:40.255319  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:57:40.264881  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:57:40.264934  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:57:40.274990  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:57:40.284941  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:57:40.284998  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:57:40.295188  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:57:40.305136  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:57:40.305181  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:57:40.315125  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:57:40.324727  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:57:40.324789  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:57:40.333574  661546 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:57:40.378743  661546 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 11:57:40.378932  661546 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:57:40.492367  661546 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:57:40.492493  661546 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:57:40.492658  661546 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 11:57:40.504994  661546 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:57:40.506760  661546 out.go:235]   - Generating certificates and keys ...
	I1209 11:57:40.506878  661546 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:57:40.506955  661546 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:57:40.507033  661546 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:57:40.507088  661546 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:57:40.507156  661546 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:57:40.507274  661546 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:57:40.507377  661546 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:57:40.507463  661546 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:57:40.507573  661546 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:57:40.507692  661546 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:57:40.507756  661546 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:57:40.507836  661546 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:57:40.607744  661546 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:57:40.684950  661546 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 11:57:40.826079  661546 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:57:40.945768  661546 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:57:41.212984  661546 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:57:41.213406  661546 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:57:41.216390  661546 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:57:41.218053  661546 out.go:235]   - Booting up control plane ...
	I1209 11:57:41.218202  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:57:41.218307  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:57:41.220009  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:57:41.237816  661546 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:57:41.244148  661546 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:57:41.244204  661546 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:57:41.371083  661546 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 11:57:41.371245  661546 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 11:57:41.872938  661546 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.998998ms
	I1209 11:57:41.873141  661546 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 11:57:46.874725  661546 kubeadm.go:310] [api-check] The API server is healthy after 5.001587898s
	I1209 11:57:46.886996  661546 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 11:57:46.897941  661546 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 11:57:46.927451  661546 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 11:57:46.927718  661546 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-005123 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 11:57:46.945578  661546 kubeadm.go:310] [bootstrap-token] Using token: bhdcn7.orsewwwtbk1gmdg8
	I1209 11:57:46.946894  661546 out.go:235]   - Configuring RBAC rules ...
	I1209 11:57:46.947041  661546 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 11:57:46.950006  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 11:57:46.956761  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 11:57:46.959756  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 11:57:46.962973  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 11:57:46.970016  661546 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 11:57:47.282251  661546 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 11:57:47.714588  661546 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 11:57:48.283610  661546 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 11:57:48.283671  661546 kubeadm.go:310] 
	I1209 11:57:48.283774  661546 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 11:57:48.283786  661546 kubeadm.go:310] 
	I1209 11:57:48.283901  661546 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 11:57:48.283948  661546 kubeadm.go:310] 
	I1209 11:57:48.283995  661546 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 11:57:48.284089  661546 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 11:57:48.284139  661546 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 11:57:48.284148  661546 kubeadm.go:310] 
	I1209 11:57:48.284216  661546 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 11:57:48.284224  661546 kubeadm.go:310] 
	I1209 11:57:48.284281  661546 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 11:57:48.284291  661546 kubeadm.go:310] 
	I1209 11:57:48.284359  661546 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 11:57:48.284465  661546 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 11:57:48.284583  661546 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 11:57:48.284596  661546 kubeadm.go:310] 
	I1209 11:57:48.284739  661546 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 11:57:48.284846  661546 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 11:57:48.284859  661546 kubeadm.go:310] 
	I1209 11:57:48.284972  661546 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bhdcn7.orsewwwtbk1gmdg8 \
	I1209 11:57:48.285133  661546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 11:57:48.285170  661546 kubeadm.go:310] 	--control-plane 
	I1209 11:57:48.285184  661546 kubeadm.go:310] 
	I1209 11:57:48.285312  661546 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 11:57:48.285321  661546 kubeadm.go:310] 
	I1209 11:57:48.285388  661546 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bhdcn7.orsewwwtbk1gmdg8 \
	I1209 11:57:48.285530  661546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 11:57:48.286117  661546 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:57:48.286246  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:57:48.286263  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:57:48.288141  661546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:57:48.289484  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:57:48.301160  661546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:57:48.320752  661546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:57:48.320831  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:48.320831  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-005123 minikube.k8s.io/updated_at=2024_12_09T11_57_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=embed-certs-005123 minikube.k8s.io/primary=true
	I1209 11:57:48.552069  661546 ops.go:34] apiserver oom_adj: -16
	I1209 11:57:48.552119  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:49.052304  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:49.552516  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:50.052548  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:50.552931  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:51.052381  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:51.552589  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.052273  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.552546  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.645059  661546 kubeadm.go:1113] duration metric: took 4.324296774s to wait for elevateKubeSystemPrivileges
	I1209 11:57:52.645107  661546 kubeadm.go:394] duration metric: took 5m0.847017281s to StartCluster
	I1209 11:57:52.645133  661546 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:52.645241  661546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:57:52.647822  661546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:52.648129  661546 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:57:52.648226  661546 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:57:52.648338  661546 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-005123"
	I1209 11:57:52.648354  661546 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-005123"
	W1209 11:57:52.648366  661546 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:57:52.648367  661546 addons.go:69] Setting default-storageclass=true in profile "embed-certs-005123"
	I1209 11:57:52.648396  661546 config.go:182] Loaded profile config "embed-certs-005123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:57:52.648397  661546 addons.go:69] Setting metrics-server=true in profile "embed-certs-005123"
	I1209 11:57:52.648434  661546 addons.go:234] Setting addon metrics-server=true in "embed-certs-005123"
	I1209 11:57:52.648399  661546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-005123"
	W1209 11:57:52.648448  661546 addons.go:243] addon metrics-server should already be in state true
	I1209 11:57:52.648499  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.648400  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.648867  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648883  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648914  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648932  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.648947  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.648917  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.649702  661546 out.go:177] * Verifying Kubernetes components...
	I1209 11:57:52.651094  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:57:52.665090  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38065
	I1209 11:57:52.665309  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35905
	I1209 11:57:52.665602  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.665889  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.666308  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.666329  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.666470  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.666492  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.666768  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.666907  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.667140  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I1209 11:57:52.667344  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.667387  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.667536  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.667580  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.667652  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.668127  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.668154  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.668657  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.668868  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.672550  661546 addons.go:234] Setting addon default-storageclass=true in "embed-certs-005123"
	W1209 11:57:52.672580  661546 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:57:52.672612  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.672985  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.673032  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.684848  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43667
	I1209 11:57:52.684854  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I1209 11:57:52.685398  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.685451  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.686054  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.686081  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.686155  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.686228  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.686553  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.686614  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.686753  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.686930  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.687838  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33245
	I1209 11:57:52.688391  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.688818  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.689013  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.689040  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.689314  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.689450  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.689908  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.689943  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.691136  661546 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:57:52.691137  661546 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:57:52.692714  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:57:52.692732  661546 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:57:52.692749  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.692789  661546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:52.692800  661546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:57:52.692813  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.696349  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.696791  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.696815  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697088  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697143  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.697314  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.697482  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.697512  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.697547  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697658  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.697787  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.697962  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.698093  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.698209  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.705766  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37265
	I1209 11:57:52.706265  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.706694  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.706721  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.707031  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.707241  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.708747  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.708980  661546 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:52.708997  661546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:57:52.709016  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.711546  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.711986  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.712011  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.712263  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.712438  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.712604  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.712751  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.858535  661546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:57:52.879035  661546 node_ready.go:35] waiting up to 6m0s for node "embed-certs-005123" to be "Ready" ...
	I1209 11:57:52.899550  661546 node_ready.go:49] node "embed-certs-005123" has status "Ready":"True"
	I1209 11:57:52.899575  661546 node_ready.go:38] duration metric: took 20.508179ms for node "embed-certs-005123" to be "Ready" ...
	I1209 11:57:52.899589  661546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:52.960716  661546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:52.962755  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:57:52.962779  661546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:57:52.995747  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:57:52.995787  661546 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:57:53.031395  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:53.031426  661546 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:57:53.031535  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:53.049695  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:53.061716  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:53.314158  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.314212  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.314523  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:53.314548  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.314565  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:53.314586  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.314598  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.314857  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.314875  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:53.323573  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.323590  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.323822  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:53.323873  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.323882  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.004616  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.004655  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.005050  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.005067  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.005075  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.005083  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.005351  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.005372  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.005370  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:54.352527  661546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.290758533s)
	I1209 11:57:54.352616  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.352636  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.352957  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.352977  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.352987  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.352995  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.353278  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.353320  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.353336  661546 addons.go:475] Verifying addon metrics-server=true in "embed-certs-005123"
	I1209 11:57:54.353387  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:54.355153  661546 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1209 11:57:54.356250  661546 addons.go:510] duration metric: took 1.708044398s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1209 11:57:54.968202  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:57.467948  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:57.467979  661546 pod_ready.go:82] duration metric: took 4.507228843s for pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:57.467992  661546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:59.475024  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace has status "Ready":"False"
	I1209 11:58:00.473961  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.473987  661546 pod_ready.go:82] duration metric: took 3.005987981s for pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.473996  661546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.478022  661546 pod_ready.go:93] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.478040  661546 pod_ready.go:82] duration metric: took 4.038353ms for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.478049  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.482415  661546 pod_ready.go:93] pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.482439  661546 pod_ready.go:82] duration metric: took 4.384854ms for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.482449  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.486284  661546 pod_ready.go:93] pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.486311  661546 pod_ready.go:82] duration metric: took 3.85467ms for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.486326  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n4pph" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.490260  661546 pod_ready.go:93] pod "kube-proxy-n4pph" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.490284  661546 pod_ready.go:82] duration metric: took 3.949342ms for pod "kube-proxy-n4pph" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.490296  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.872396  661546 pod_ready.go:93] pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.872420  661546 pod_ready.go:82] duration metric: took 382.116873ms for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.872428  661546 pod_ready.go:39] duration metric: took 7.97282742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:58:00.872446  661546 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:58:00.872502  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:58:00.887281  661546 api_server.go:72] duration metric: took 8.239108757s to wait for apiserver process to appear ...
	I1209 11:58:00.887312  661546 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:58:00.887333  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:58:00.892005  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 200:
	ok
	I1209 11:58:00.893247  661546 api_server.go:141] control plane version: v1.31.2
	I1209 11:58:00.893277  661546 api_server.go:131] duration metric: took 5.95753ms to wait for apiserver health ...
	I1209 11:58:00.893288  661546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:58:01.074723  661546 system_pods.go:59] 9 kube-system pods found
	I1209 11:58:01.074756  661546 system_pods.go:61] "coredns-7c65d6cfc9-t49mk" [ca3ba094-58a2-401d-8aea-46d6d96baacb] Running
	I1209 11:58:01.074762  661546 system_pods.go:61] "coredns-7c65d6cfc9-xspr9" [9384e9ea-987e-4728-bdf2-773645d52ab1] Running
	I1209 11:58:01.074766  661546 system_pods.go:61] "etcd-embed-certs-005123" [f8b23ab4-b852-4598-93d7-3b6eb1543a4b] Running
	I1209 11:58:01.074771  661546 system_pods.go:61] "kube-apiserver-embed-certs-005123" [96b0cb5a-1d81-48fc-bbc9-015de7c48ac5] Running
	I1209 11:58:01.074774  661546 system_pods.go:61] "kube-controller-manager-embed-certs-005123" [33c237d8-8f5a-4d8f-870a-7ba0dc2180dd] Running
	I1209 11:58:01.074777  661546 system_pods.go:61] "kube-proxy-n4pph" [520d101f-0df0-413f-a0fc-22ecc2884d40] Running
	I1209 11:58:01.074780  661546 system_pods.go:61] "kube-scheduler-embed-certs-005123" [11dcaf15-fc3b-4945-af9b-d08aa5528679] Running
	I1209 11:58:01.074786  661546 system_pods.go:61] "metrics-server-6867b74b74-zfw9r" [8438b820-4cc5-4d7b-8af5-9349fdd87ca8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:58:01.074791  661546 system_pods.go:61] "storage-provisioner" [91ceb801-7262-4d7e-9623-c8c1931fc34b] Running
	I1209 11:58:01.074797  661546 system_pods.go:74] duration metric: took 181.502993ms to wait for pod list to return data ...
	I1209 11:58:01.074804  661546 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:58:01.272664  661546 default_sa.go:45] found service account: "default"
	I1209 11:58:01.272697  661546 default_sa.go:55] duration metric: took 197.886347ms for default service account to be created ...
	I1209 11:58:01.272707  661546 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:58:01.475062  661546 system_pods.go:86] 9 kube-system pods found
	I1209 11:58:01.475096  661546 system_pods.go:89] "coredns-7c65d6cfc9-t49mk" [ca3ba094-58a2-401d-8aea-46d6d96baacb] Running
	I1209 11:58:01.475102  661546 system_pods.go:89] "coredns-7c65d6cfc9-xspr9" [9384e9ea-987e-4728-bdf2-773645d52ab1] Running
	I1209 11:58:01.475105  661546 system_pods.go:89] "etcd-embed-certs-005123" [f8b23ab4-b852-4598-93d7-3b6eb1543a4b] Running
	I1209 11:58:01.475109  661546 system_pods.go:89] "kube-apiserver-embed-certs-005123" [96b0cb5a-1d81-48fc-bbc9-015de7c48ac5] Running
	I1209 11:58:01.475114  661546 system_pods.go:89] "kube-controller-manager-embed-certs-005123" [33c237d8-8f5a-4d8f-870a-7ba0dc2180dd] Running
	I1209 11:58:01.475118  661546 system_pods.go:89] "kube-proxy-n4pph" [520d101f-0df0-413f-a0fc-22ecc2884d40] Running
	I1209 11:58:01.475121  661546 system_pods.go:89] "kube-scheduler-embed-certs-005123" [11dcaf15-fc3b-4945-af9b-d08aa5528679] Running
	I1209 11:58:01.475131  661546 system_pods.go:89] "metrics-server-6867b74b74-zfw9r" [8438b820-4cc5-4d7b-8af5-9349fdd87ca8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:58:01.475138  661546 system_pods.go:89] "storage-provisioner" [91ceb801-7262-4d7e-9623-c8c1931fc34b] Running
	I1209 11:58:01.475148  661546 system_pods.go:126] duration metric: took 202.434687ms to wait for k8s-apps to be running ...
	I1209 11:58:01.475158  661546 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:58:01.475220  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:58:01.490373  661546 system_svc.go:56] duration metric: took 15.20079ms WaitForService to wait for kubelet
	I1209 11:58:01.490416  661546 kubeadm.go:582] duration metric: took 8.842250416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:58:01.490451  661546 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:58:01.673621  661546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:58:01.673651  661546 node_conditions.go:123] node cpu capacity is 2
	I1209 11:58:01.673662  661546 node_conditions.go:105] duration metric: took 183.205852ms to run NodePressure ...
	I1209 11:58:01.673674  661546 start.go:241] waiting for startup goroutines ...
	I1209 11:58:01.673681  661546 start.go:246] waiting for cluster config update ...
	I1209 11:58:01.673691  661546 start.go:255] writing updated cluster config ...
	I1209 11:58:01.673995  661546 ssh_runner.go:195] Run: rm -f paused
	I1209 11:58:01.725363  661546 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:58:01.727275  661546 out.go:177] * Done! kubectl is now configured to use "embed-certs-005123" cluster and "default" namespace by default
	I1209 11:58:14.994765  662586 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 11:58:14.994918  662586 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 11:58:14.995050  662586 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:58:14.995118  662586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:58:14.995182  662586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:58:14.995272  662586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:58:14.995353  662586 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:58:14.995410  662586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:58:14.996905  662586 out.go:235]   - Generating certificates and keys ...
	I1209 11:58:14.997000  662586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:58:14.997055  662586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:58:14.997123  662586 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:58:14.997184  662586 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:58:14.997278  662586 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:58:14.997349  662586 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:58:14.997474  662586 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:58:14.997567  662586 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:58:14.997631  662586 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:58:14.997700  662586 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:58:14.997736  662586 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:58:14.997783  662586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:58:14.997826  662586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:58:14.997871  662586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:58:14.997930  662586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:58:14.997977  662586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:58:14.998063  662586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:58:14.998141  662586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:58:14.998199  662586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:58:14.998264  662586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:58:14.999539  662586 out.go:235]   - Booting up control plane ...
	I1209 11:58:14.999663  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:58:14.999748  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:58:14.999824  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:58:14.999946  662586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:58:15.000148  662586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:58:15.000221  662586 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:58:15.000326  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000532  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.000598  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000753  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.000814  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000971  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001064  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.001273  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001335  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.001486  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001493  662586 kubeadm.go:310] 
	I1209 11:58:15.001553  662586 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 11:58:15.001616  662586 kubeadm.go:310] 		timed out waiting for the condition
	I1209 11:58:15.001631  662586 kubeadm.go:310] 
	I1209 11:58:15.001685  662586 kubeadm.go:310] 	This error is likely caused by:
	I1209 11:58:15.001732  662586 kubeadm.go:310] 		- The kubelet is not running
	I1209 11:58:15.001883  662586 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 11:58:15.001897  662586 kubeadm.go:310] 
	I1209 11:58:15.002041  662586 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 11:58:15.002087  662586 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 11:58:15.002146  662586 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 11:58:15.002156  662586 kubeadm.go:310] 
	I1209 11:58:15.002294  662586 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 11:58:15.002373  662586 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 11:58:15.002380  662586 kubeadm.go:310] 
	I1209 11:58:15.002502  662586 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 11:58:15.002623  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 11:58:15.002725  662586 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 11:58:15.002799  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 11:58:15.002835  662586 kubeadm.go:310] 
	W1209 11:58:15.002956  662586 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
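The kubeadm message above is the standard kubelet triage checklist. A minimal shell sketch of working through it, using only the commands quoted in the log and the cri-o socket path shown there (run on the failing node):

	# is the kubelet running, and why did it exit?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# did cri-o manage to start any control-plane containers?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect a failing container's logs (substitute CONTAINERID from the listing above)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID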
	
	I1209 11:58:15.003022  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:58:15.469838  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:58:15.484503  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:58:15.493409  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:58:15.493430  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:58:15.493487  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:58:15.502508  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:58:15.502568  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:58:15.511743  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:58:15.519855  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:58:15.519913  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:58:15.528743  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:58:15.537000  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:58:15.537072  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:58:15.546520  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:58:15.555448  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:58:15.555526  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
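The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes stale copies. A condensed sketch of the same check, assuming the endpoint used in this run:

	endpoint="https://control-plane.minikube.internal:8443"
	for name in admin kubelet controller-manager scheduler; do
	  conf="/etc/kubernetes/${name}.conf"
	  # delete the file if it does not reference the expected endpoint (grep also fails if the file is absent)
	  sudo grep -q "$endpoint" "$conf" 2>/dev/null || sudo rm -f "$conf"
	done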
	I1209 11:58:15.565618  662586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:58:15.631763  662586 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:58:15.631832  662586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:58:15.798683  662586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:58:15.798822  662586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:58:15.798957  662586 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:58:15.974522  662586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:58:15.976286  662586 out.go:235]   - Generating certificates and keys ...
	I1209 11:58:15.976408  662586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:58:15.976492  662586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:58:15.976616  662586 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:58:15.976714  662586 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:58:15.976813  662586 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:58:15.976889  662586 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:58:15.976978  662586 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:58:15.977064  662586 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:58:15.977184  662586 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:58:15.977251  662586 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:58:15.977287  662586 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:58:15.977363  662586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:58:16.193383  662586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:58:16.324912  662586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:58:16.541372  662586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:58:16.786389  662586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:58:16.807241  662586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:58:16.808750  662586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:58:16.808823  662586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:58:16.951756  662586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:58:16.954338  662586 out.go:235]   - Booting up control plane ...
	I1209 11:58:16.954486  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:58:16.968892  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:58:16.970556  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:58:16.971301  662586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:58:16.974040  662586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:58:56.976537  662586 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:58:56.976966  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:56.977214  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:01.977861  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:01.978074  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:11.978821  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:11.979056  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:31.980118  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:31.980386  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 12:00:11.981507  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 12:00:11.981791  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 12:00:11.981804  662586 kubeadm.go:310] 
	I1209 12:00:11.981863  662586 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 12:00:11.981916  662586 kubeadm.go:310] 		timed out waiting for the condition
	I1209 12:00:11.981926  662586 kubeadm.go:310] 
	I1209 12:00:11.981977  662586 kubeadm.go:310] 	This error is likely caused by:
	I1209 12:00:11.982028  662586 kubeadm.go:310] 		- The kubelet is not running
	I1209 12:00:11.982232  662586 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 12:00:11.982262  662586 kubeadm.go:310] 
	I1209 12:00:11.982449  662586 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 12:00:11.982506  662586 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 12:00:11.982555  662586 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 12:00:11.982564  662586 kubeadm.go:310] 
	I1209 12:00:11.982709  662586 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 12:00:11.982824  662586 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 12:00:11.982837  662586 kubeadm.go:310] 
	I1209 12:00:11.982975  662586 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 12:00:11.983092  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 12:00:11.983186  662586 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 12:00:11.983259  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 12:00:11.983308  662586 kubeadm.go:310] 
	I1209 12:00:11.983442  662586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 12:00:11.983534  662586 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 12:00:11.983622  662586 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 12:00:11.983692  662586 kubeadm.go:394] duration metric: took 7m57.372617524s to StartCluster
	I1209 12:00:11.983778  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 12:00:11.983852  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 12:00:12.032068  662586 cri.go:89] found id: ""
	I1209 12:00:12.032110  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.032126  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 12:00:12.032139  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 12:00:12.032232  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 12:00:12.074929  662586 cri.go:89] found id: ""
	I1209 12:00:12.074977  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.074990  662586 logs.go:284] No container was found matching "etcd"
	I1209 12:00:12.075001  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 12:00:12.075074  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 12:00:12.113547  662586 cri.go:89] found id: ""
	I1209 12:00:12.113582  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.113592  662586 logs.go:284] No container was found matching "coredns"
	I1209 12:00:12.113598  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 12:00:12.113661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 12:00:12.147436  662586 cri.go:89] found id: ""
	I1209 12:00:12.147465  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.147475  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 12:00:12.147481  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 12:00:12.147535  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 12:00:12.184398  662586 cri.go:89] found id: ""
	I1209 12:00:12.184439  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.184453  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 12:00:12.184463  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 12:00:12.184541  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 12:00:12.230844  662586 cri.go:89] found id: ""
	I1209 12:00:12.230884  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.230896  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 12:00:12.230905  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 12:00:12.230981  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 12:00:12.264897  662586 cri.go:89] found id: ""
	I1209 12:00:12.264930  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.264939  662586 logs.go:284] No container was found matching "kindnet"
	I1209 12:00:12.264946  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 12:00:12.265001  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 12:00:12.303553  662586 cri.go:89] found id: ""
	I1209 12:00:12.303594  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.303607  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
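Each lookup above runs crictl with a name filter and finds no containers, consistent with the kubelet never launching the static pods. A short sketch of the same per-component check, assuming crictl is on PATH and points at the default runtime endpoint:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name "$name")
	  echo "$name: ${ids:-<none>}"
	done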
	I1209 12:00:12.303622  662586 logs.go:123] Gathering logs for container status ...
	I1209 12:00:12.303638  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 12:00:12.342799  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 12:00:12.342838  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 12:00:12.392992  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 12:00:12.393039  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 12:00:12.407065  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 12:00:12.407100  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 12:00:12.483599  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 12:00:12.483651  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 12:00:12.483675  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
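The log-gathering steps above pull the kubelet journal, dmesg, a describe-nodes attempt, and the cri-o journal. A sketch of collecting the same set by hand into files; the kubectl and kubeconfig paths are the ones minikube uses in this run:

	sudo journalctl -u kubelet -n 400 > kubelet.log
	sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig > nodes.txt 2>&1 || true
	sudo journalctl -u crio -n 400 > crio.log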
	W1209 12:00:12.591518  662586 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1209 12:00:12.591615  662586 out.go:270] * 
	W1209 12:00:12.591715  662586 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 12:00:12.591737  662586 out.go:270] * 
	W1209 12:00:12.592644  662586 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 12:00:12.596340  662586 out.go:201] 
	W1209 12:00:12.597706  662586 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 12:00:12.597757  662586 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 12:00:12.597798  662586 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 12:00:12.599219  662586 out.go:201] 
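The suggestion above points at a kubelet cgroup-driver mismatch. A hedged example of retrying the start with that override; the profile name and driver flags are illustrative, only the --extra-config value comes from the suggestion:

	minikube start -p <profile> \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd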
	
	
	==> CRI-O <==
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.772021767Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746023771997566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c90bac28-bda3-4a8a-b36c-a61d67c7cf5d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.772531849Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2faa3099-c3a3-4ee1-a877-b1b6c585109d name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.772582438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2faa3099-c3a3-4ee1-a877-b1b6c585109d name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.772782536Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd836c617c4c71eefb766c2dfb55170cf3cf91517592b1a7a183c74e32ea64a6,PodSandboxId:7455f6989fecae39d7d0c95e8bc7072133ece82c435ce6ed37b5f621db26a696,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745474452157860,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ceb801-7262-4d7e-9623-c8c1931fc34b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e04e6c67eb04484f0a7ed6ae026d286dbe58b1771e200b50a3b5fb3155cfd2,PodSandboxId:6f09b37ff62169ca1ef8b5d9ea743a40d10b39438e49add1cdf06c9242f0bad5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745473966659142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xspr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9384e9ea-987e-4728-bdf2-773645d52ab1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad57f45638e33d61673cc77cf320de335668a20f9d834892bcd702efb4ff209,PodSandboxId:8576a7808ab702e2fa9b5d11849da794aed95311a7db73c0a1536df14395d7c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745473869691762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-t49mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
a3ba094-58a2-401d-8aea-46d6d96baacb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527b59b253be0db5e7d00289763d2f0aeea7b9d27b8830656ccecafd25947cf8,PodSandboxId:c4229a54854342602f545e73b05d2e5f6e82c169027d56572c6bb1c6daaab695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733745473444443662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n4pph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520d101f-0df0-413f-a0fc-22ecc2884d40,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f57c0d1a2bfd5d40b6509193ea8dc5b5a600119199f1468b5c725144ac6de3,PodSandboxId:b8f808142acc4c40969cb81f766a314d2992f607a199f6939bcbf4fd9da1f70e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733745462474273995
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897c2c22804a3f807b6c53388bf5ae22,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:528aa672a3fab35454ac1a9762bde88dacd8b0f9c91555af3a1d1f93061a1350,PodSandboxId:d30789ee9d14a7988cb1126d5d97bf77b940b8ccac52cf00674379b888851603,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745462471
449496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9e6f1bc197424a6926dd5e34d40c87175209f99a1e583f8dbdba504862c6f8,PodSandboxId:43b4489819cbc50f99af06bb8570b409765aaac8ce25482cca038fb31307432f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733745462431601063,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e13b64e3c640916722b20659a0937fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9babc273bf1d0f041ccf16c5711057fdc4abc34dc992320f1aefd25a4d5b36e,PodSandboxId:c97ab69d81cf40be8be853b2726f9c74e5222b50fc79b93e034ff92da0a4c035,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733745462358888452,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc26478084558b0a11e6df527ae8916,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecbfc4afc60ee805fae40bf4534caf8357b9374f4449f3faac32927d8404ae4,PodSandboxId:6211679fe2c07ed8493a037532ef67b2673ec799f6e8f3a0ff2af327b3452fa6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733745175261468738,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2faa3099-c3a3-4ee1-a877-b1b6c585109d name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.808917023Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aee47e2e-dc51-4d2a-a0a0-ce566a3f30dc name=/runtime.v1.RuntimeService/Version
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.809041896Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aee47e2e-dc51-4d2a-a0a0-ce566a3f30dc name=/runtime.v1.RuntimeService/Version
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.810277328Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a445e50-50f6-4401-82b6-c000c7d4cba8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.810646481Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746023810627292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a445e50-50f6-4401-82b6-c000c7d4cba8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.811238423Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38decd9c-bfa9-488b-9a47-e156f1558c66 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.811289819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38decd9c-bfa9-488b-9a47-e156f1558c66 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.811473101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd836c617c4c71eefb766c2dfb55170cf3cf91517592b1a7a183c74e32ea64a6,PodSandboxId:7455f6989fecae39d7d0c95e8bc7072133ece82c435ce6ed37b5f621db26a696,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745474452157860,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ceb801-7262-4d7e-9623-c8c1931fc34b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e04e6c67eb04484f0a7ed6ae026d286dbe58b1771e200b50a3b5fb3155cfd2,PodSandboxId:6f09b37ff62169ca1ef8b5d9ea743a40d10b39438e49add1cdf06c9242f0bad5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745473966659142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xspr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9384e9ea-987e-4728-bdf2-773645d52ab1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad57f45638e33d61673cc77cf320de335668a20f9d834892bcd702efb4ff209,PodSandboxId:8576a7808ab702e2fa9b5d11849da794aed95311a7db73c0a1536df14395d7c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745473869691762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-t49mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
a3ba094-58a2-401d-8aea-46d6d96baacb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527b59b253be0db5e7d00289763d2f0aeea7b9d27b8830656ccecafd25947cf8,PodSandboxId:c4229a54854342602f545e73b05d2e5f6e82c169027d56572c6bb1c6daaab695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733745473444443662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n4pph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520d101f-0df0-413f-a0fc-22ecc2884d40,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f57c0d1a2bfd5d40b6509193ea8dc5b5a600119199f1468b5c725144ac6de3,PodSandboxId:b8f808142acc4c40969cb81f766a314d2992f607a199f6939bcbf4fd9da1f70e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733745462474273995
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897c2c22804a3f807b6c53388bf5ae22,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:528aa672a3fab35454ac1a9762bde88dacd8b0f9c91555af3a1d1f93061a1350,PodSandboxId:d30789ee9d14a7988cb1126d5d97bf77b940b8ccac52cf00674379b888851603,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745462471
449496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9e6f1bc197424a6926dd5e34d40c87175209f99a1e583f8dbdba504862c6f8,PodSandboxId:43b4489819cbc50f99af06bb8570b409765aaac8ce25482cca038fb31307432f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733745462431601063,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e13b64e3c640916722b20659a0937fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9babc273bf1d0f041ccf16c5711057fdc4abc34dc992320f1aefd25a4d5b36e,PodSandboxId:c97ab69d81cf40be8be853b2726f9c74e5222b50fc79b93e034ff92da0a4c035,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733745462358888452,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc26478084558b0a11e6df527ae8916,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecbfc4afc60ee805fae40bf4534caf8357b9374f4449f3faac32927d8404ae4,PodSandboxId:6211679fe2c07ed8493a037532ef67b2673ec799f6e8f3a0ff2af327b3452fa6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733745175261468738,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=38decd9c-bfa9-488b-9a47-e156f1558c66 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.847086835Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a3dade3-22d6-410a-b994-c4a465d9c780 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.847158226Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a3dade3-22d6-410a-b994-c4a465d9c780 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.848309778Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a1a91ec-5f0a-441e-9cd7-33c93be9d882 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.848698424Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746023848678332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a1a91ec-5f0a-441e-9cd7-33c93be9d882 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.849200546Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f29fdbb-5f65-4b75-b3fe-6196a9a2f5d6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.849247808Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f29fdbb-5f65-4b75-b3fe-6196a9a2f5d6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.849449261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd836c617c4c71eefb766c2dfb55170cf3cf91517592b1a7a183c74e32ea64a6,PodSandboxId:7455f6989fecae39d7d0c95e8bc7072133ece82c435ce6ed37b5f621db26a696,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745474452157860,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ceb801-7262-4d7e-9623-c8c1931fc34b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e04e6c67eb04484f0a7ed6ae026d286dbe58b1771e200b50a3b5fb3155cfd2,PodSandboxId:6f09b37ff62169ca1ef8b5d9ea743a40d10b39438e49add1cdf06c9242f0bad5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745473966659142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xspr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9384e9ea-987e-4728-bdf2-773645d52ab1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad57f45638e33d61673cc77cf320de335668a20f9d834892bcd702efb4ff209,PodSandboxId:8576a7808ab702e2fa9b5d11849da794aed95311a7db73c0a1536df14395d7c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745473869691762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-t49mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
a3ba094-58a2-401d-8aea-46d6d96baacb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527b59b253be0db5e7d00289763d2f0aeea7b9d27b8830656ccecafd25947cf8,PodSandboxId:c4229a54854342602f545e73b05d2e5f6e82c169027d56572c6bb1c6daaab695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733745473444443662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n4pph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520d101f-0df0-413f-a0fc-22ecc2884d40,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f57c0d1a2bfd5d40b6509193ea8dc5b5a600119199f1468b5c725144ac6de3,PodSandboxId:b8f808142acc4c40969cb81f766a314d2992f607a199f6939bcbf4fd9da1f70e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733745462474273995
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897c2c22804a3f807b6c53388bf5ae22,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:528aa672a3fab35454ac1a9762bde88dacd8b0f9c91555af3a1d1f93061a1350,PodSandboxId:d30789ee9d14a7988cb1126d5d97bf77b940b8ccac52cf00674379b888851603,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745462471
449496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9e6f1bc197424a6926dd5e34d40c87175209f99a1e583f8dbdba504862c6f8,PodSandboxId:43b4489819cbc50f99af06bb8570b409765aaac8ce25482cca038fb31307432f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733745462431601063,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e13b64e3c640916722b20659a0937fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9babc273bf1d0f041ccf16c5711057fdc4abc34dc992320f1aefd25a4d5b36e,PodSandboxId:c97ab69d81cf40be8be853b2726f9c74e5222b50fc79b93e034ff92da0a4c035,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733745462358888452,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc26478084558b0a11e6df527ae8916,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecbfc4afc60ee805fae40bf4534caf8357b9374f4449f3faac32927d8404ae4,PodSandboxId:6211679fe2c07ed8493a037532ef67b2673ec799f6e8f3a0ff2af327b3452fa6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733745175261468738,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f29fdbb-5f65-4b75-b3fe-6196a9a2f5d6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.880885378Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b424212-eb0e-47dc-b38b-ad5b8aba69af name=/runtime.v1.RuntimeService/Version
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.881017212Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b424212-eb0e-47dc-b38b-ad5b8aba69af name=/runtime.v1.RuntimeService/Version
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.883244206Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64dd9429-fa10-44b2-9160-f845be2d5689 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.883906979Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746023883882020,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64dd9429-fa10-44b2-9160-f845be2d5689 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.884630050Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=afe055bf-bd24-4cbf-8a7c-b67bb9bcbe7f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.884681604Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=afe055bf-bd24-4cbf-8a7c-b67bb9bcbe7f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:07:03 embed-certs-005123 crio[702]: time="2024-12-09 12:07:03.884869007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd836c617c4c71eefb766c2dfb55170cf3cf91517592b1a7a183c74e32ea64a6,PodSandboxId:7455f6989fecae39d7d0c95e8bc7072133ece82c435ce6ed37b5f621db26a696,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745474452157860,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ceb801-7262-4d7e-9623-c8c1931fc34b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e04e6c67eb04484f0a7ed6ae026d286dbe58b1771e200b50a3b5fb3155cfd2,PodSandboxId:6f09b37ff62169ca1ef8b5d9ea743a40d10b39438e49add1cdf06c9242f0bad5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745473966659142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xspr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9384e9ea-987e-4728-bdf2-773645d52ab1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad57f45638e33d61673cc77cf320de335668a20f9d834892bcd702efb4ff209,PodSandboxId:8576a7808ab702e2fa9b5d11849da794aed95311a7db73c0a1536df14395d7c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745473869691762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-t49mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
a3ba094-58a2-401d-8aea-46d6d96baacb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527b59b253be0db5e7d00289763d2f0aeea7b9d27b8830656ccecafd25947cf8,PodSandboxId:c4229a54854342602f545e73b05d2e5f6e82c169027d56572c6bb1c6daaab695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733745473444443662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n4pph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520d101f-0df0-413f-a0fc-22ecc2884d40,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f57c0d1a2bfd5d40b6509193ea8dc5b5a600119199f1468b5c725144ac6de3,PodSandboxId:b8f808142acc4c40969cb81f766a314d2992f607a199f6939bcbf4fd9da1f70e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733745462474273995
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897c2c22804a3f807b6c53388bf5ae22,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:528aa672a3fab35454ac1a9762bde88dacd8b0f9c91555af3a1d1f93061a1350,PodSandboxId:d30789ee9d14a7988cb1126d5d97bf77b940b8ccac52cf00674379b888851603,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745462471
449496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9e6f1bc197424a6926dd5e34d40c87175209f99a1e583f8dbdba504862c6f8,PodSandboxId:43b4489819cbc50f99af06bb8570b409765aaac8ce25482cca038fb31307432f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733745462431601063,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e13b64e3c640916722b20659a0937fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9babc273bf1d0f041ccf16c5711057fdc4abc34dc992320f1aefd25a4d5b36e,PodSandboxId:c97ab69d81cf40be8be853b2726f9c74e5222b50fc79b93e034ff92da0a4c035,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733745462358888452,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc26478084558b0a11e6df527ae8916,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecbfc4afc60ee805fae40bf4534caf8357b9374f4449f3faac32927d8404ae4,PodSandboxId:6211679fe2c07ed8493a037532ef67b2673ec799f6e8f3a0ff2af327b3452fa6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733745175261468738,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=afe055bf-bd24-4cbf-8a7c-b67bb9bcbe7f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bd836c617c4c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   7455f6989feca       storage-provisioner
	83e04e6c67eb0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   6f09b37ff6216       coredns-7c65d6cfc9-xspr9
	1ad57f45638e3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   8576a7808ab70       coredns-7c65d6cfc9-t49mk
	527b59b253be0       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   c4229a5485434       kube-proxy-n4pph
	c6f57c0d1a2bf       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   b8f808142acc4       kube-controller-manager-embed-certs-005123
	528aa672a3fab       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   d30789ee9d14a       kube-apiserver-embed-certs-005123
	da9e6f1bc1974       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   43b4489819cbc       kube-scheduler-embed-certs-005123
	d9babc273bf1d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   c97ab69d81cf4       etcd-embed-certs-005123
	9ecbfc4afc60e       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   6211679fe2c07       kube-apiserver-embed-certs-005123
	
	
	==> coredns [1ad57f45638e33d61673cc77cf320de335668a20f9d834892bcd702efb4ff209] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [83e04e6c67eb04484f0a7ed6ae026d286dbe58b1771e200b50a3b5fb3155cfd2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-005123
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-005123
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=embed-certs-005123
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T11_57_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 11:57:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-005123
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 12:06:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 12:03:04 +0000   Mon, 09 Dec 2024 11:57:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 12:03:04 +0000   Mon, 09 Dec 2024 11:57:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 12:03:04 +0000   Mon, 09 Dec 2024 11:57:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 12:03:04 +0000   Mon, 09 Dec 2024 11:57:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.218
	  Hostname:    embed-certs-005123
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4bae420e2f61438cbcb08aca330ef929
	  System UUID:                4bae420e-2f61-438c-bcb0-8aca330ef929
	  Boot ID:                    540eed1d-106c-4560-9304-3f7bc5c5d90e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-t49mk                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-7c65d6cfc9-xspr9                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-embed-certs-005123                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-embed-certs-005123             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-embed-certs-005123    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-n4pph                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-embed-certs-005123             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-zfw9r               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m9s   kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node embed-certs-005123 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node embed-certs-005123 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node embed-certs-005123 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s  node-controller  Node embed-certs-005123 event: Registered Node embed-certs-005123 in Controller
	
	
	==> dmesg <==
	[  +0.055350] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041048] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.019448] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.164562] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.628061] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.346347] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.057637] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063658] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.178175] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.140169] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.282387] systemd-fstab-generator[693]: Ignoring "noauto" option for root device
	[  +4.073106] systemd-fstab-generator[781]: Ignoring "noauto" option for root device
	[  +1.974083] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +0.067432] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.519404] kauditd_printk_skb: 69 callbacks suppressed
	[Dec 9 11:53] kauditd_printk_skb: 90 callbacks suppressed
	[Dec 9 11:57] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.346175] systemd-fstab-generator[2619]: Ignoring "noauto" option for root device
	[  +4.538537] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.500808] systemd-fstab-generator[2935]: Ignoring "noauto" option for root device
	[  +5.429186] systemd-fstab-generator[3048]: Ignoring "noauto" option for root device
	[  +0.084529] kauditd_printk_skb: 14 callbacks suppressed
	[Dec 9 11:58] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [d9babc273bf1d0f041ccf16c5711057fdc4abc34dc992320f1aefd25a4d5b36e] <==
	{"level":"info","ts":"2024-12-09T11:57:42.652423Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-12-09T11:57:42.652416Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.218:2380"}
	{"level":"info","ts":"2024-12-09T11:57:42.652517Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.218:2380"}
	{"level":"info","ts":"2024-12-09T11:57:42.653542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"920986b861bdd178 switched to configuration voters=(10523090130799808888)"}
	{"level":"info","ts":"2024-12-09T11:57:42.653795Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fc7b2fb2a5a2cf43","local-member-id":"920986b861bdd178","added-peer-id":"920986b861bdd178","added-peer-peer-urls":["https://192.168.72.218:2380"]}
	{"level":"info","ts":"2024-12-09T11:57:42.714544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"920986b861bdd178 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-09T11:57:42.716194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"920986b861bdd178 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-09T11:57:42.716357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"920986b861bdd178 received MsgPreVoteResp from 920986b861bdd178 at term 1"}
	{"level":"info","ts":"2024-12-09T11:57:42.716396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"920986b861bdd178 became candidate at term 2"}
	{"level":"info","ts":"2024-12-09T11:57:42.716459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"920986b861bdd178 received MsgVoteResp from 920986b861bdd178 at term 2"}
	{"level":"info","ts":"2024-12-09T11:57:42.716556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"920986b861bdd178 became leader at term 2"}
	{"level":"info","ts":"2024-12-09T11:57:42.716584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 920986b861bdd178 elected leader 920986b861bdd178 at term 2"}
	{"level":"info","ts":"2024-12-09T11:57:42.721652Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"920986b861bdd178","local-member-attributes":"{Name:embed-certs-005123 ClientURLs:[https://192.168.72.218:2379]}","request-path":"/0/members/920986b861bdd178/attributes","cluster-id":"fc7b2fb2a5a2cf43","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-09T11:57:42.723028Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T11:57:42.723434Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:57:42.723574Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T11:57:42.725383Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T11:57:42.726112Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.218:2379"}
	{"level":"info","ts":"2024-12-09T11:57:42.726206Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fc7b2fb2a5a2cf43","local-member-id":"920986b861bdd178","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:57:42.726280Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:57:42.726312Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:57:42.738991Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-09T11:57:42.739027Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-09T11:57:42.739547Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T11:57:42.740344Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:07:04 up 14 min,  0 users,  load average: 0.35, 0.26, 0.17
	Linux embed-certs-005123 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [528aa672a3fab35454ac1a9762bde88dacd8b0f9c91555af3a1d1f93061a1350] <==
	W1209 12:02:45.954504       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:02:45.954571       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1209 12:02:45.955684       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:02:45.955730       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1209 12:03:45.956117       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:03:45.956208       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1209 12:03:45.956280       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:03:45.956316       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1209 12:03:45.957358       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:03:45.957433       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1209 12:05:45.958356       1 handler_proxy.go:99] no RequestInfo found in the context
	W1209 12:05:45.958378       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:05:45.958729       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1209 12:05:45.958766       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1209 12:05:45.959979       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:05:45.960051       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [9ecbfc4afc60ee805fae40bf4534caf8357b9374f4449f3faac32927d8404ae4] <==
	W1209 11:57:35.187126       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.194779       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.206387       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.251234       1 logging.go:55] [core] [Channel #18 SubChannel #19]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.354338       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.549909       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.559656       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.580493       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.591478       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.605147       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.647061       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.743226       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.744433       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.760011       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.764638       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.793517       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.864266       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.957809       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.963305       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:36.126666       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:36.343167       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:39.257586       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:39.399697       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:39.421750       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:39.506029       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [c6f57c0d1a2bfd5d40b6509193ea8dc5b5a600119199f1468b5c725144ac6de3] <==
	E1209 12:01:51.804786       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:01:52.331444       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:02:21.813694       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:02:22.340459       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:02:51.821068       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:02:52.351416       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1209 12:03:04.828413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-005123"
	E1209 12:03:21.828549       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:03:22.358810       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1209 12:03:50.628374       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="260.707µs"
	E1209 12:03:51.835453       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:03:52.368278       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1209 12:04:02.628185       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="131.49µs"
	E1209 12:04:21.844783       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:04:22.378479       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:04:51.851384       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:04:52.393849       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:05:21.856793       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:05:22.402142       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:05:51.862560       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:05:52.409836       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:06:21.869187       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:06:22.418866       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:06:51.876585       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:06:52.436794       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [527b59b253be0db5e7d00289763d2f0aeea7b9d27b8830656ccecafd25947cf8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 11:57:54.068875       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 11:57:54.096967       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.218"]
	E1209 11:57:54.097247       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 11:57:54.309274       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 11:57:54.309394       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 11:57:54.309480       1 server_linux.go:169] "Using iptables Proxier"
	I1209 11:57:54.368574       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 11:57:54.368806       1 server.go:483] "Version info" version="v1.31.2"
	I1209 11:57:54.368832       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 11:57:54.371776       1 config.go:199] "Starting service config controller"
	I1209 11:57:54.373211       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 11:57:54.373360       1 config.go:328] "Starting node config controller"
	I1209 11:57:54.382406       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 11:57:54.375882       1 config.go:105] "Starting endpoint slice config controller"
	I1209 11:57:54.382502       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 11:57:54.382604       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 11:57:54.386636       1 shared_informer.go:320] Caches are synced for node config
	I1209 11:57:54.484119       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [da9e6f1bc197424a6926dd5e34d40c87175209f99a1e583f8dbdba504862c6f8] <==
	W1209 11:57:45.893733       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 11:57:45.893890       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:45.902278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 11:57:45.902397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:45.950549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1209 11:57:45.950691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:46.020249       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 11:57:46.020433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:46.078852       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 11:57:46.079029       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:46.087627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1209 11:57:46.087772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:46.116185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1209 11:57:46.116410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:46.130702       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1209 11:57:46.131081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:46.165220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1209 11:57:46.165474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:46.235055       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 11:57:46.235227       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1209 11:57:46.237459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1209 11:57:46.237508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:46.301529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 11:57:46.301564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1209 11:57:48.398034       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 12:05:50 embed-certs-005123 kubelet[2942]: E1209 12:05:50.613462    2942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zfw9r" podUID="8438b820-4cc5-4d7b-8af5-9349fdd87ca8"
	Dec 09 12:05:57 embed-certs-005123 kubelet[2942]: E1209 12:05:57.737586    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745957737150206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:05:57 embed-certs-005123 kubelet[2942]: E1209 12:05:57.738056    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745957737150206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:01 embed-certs-005123 kubelet[2942]: E1209 12:06:01.613133    2942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zfw9r" podUID="8438b820-4cc5-4d7b-8af5-9349fdd87ca8"
	Dec 09 12:06:07 embed-certs-005123 kubelet[2942]: E1209 12:06:07.739225    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745967738902943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:07 embed-certs-005123 kubelet[2942]: E1209 12:06:07.739281    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745967738902943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:16 embed-certs-005123 kubelet[2942]: E1209 12:06:16.613156    2942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zfw9r" podUID="8438b820-4cc5-4d7b-8af5-9349fdd87ca8"
	Dec 09 12:06:17 embed-certs-005123 kubelet[2942]: E1209 12:06:17.741029    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745977740486138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:17 embed-certs-005123 kubelet[2942]: E1209 12:06:17.741436    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745977740486138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:27 embed-certs-005123 kubelet[2942]: E1209 12:06:27.744551    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745987743172930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:27 embed-certs-005123 kubelet[2942]: E1209 12:06:27.744592    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745987743172930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:30 embed-certs-005123 kubelet[2942]: E1209 12:06:30.615035    2942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zfw9r" podUID="8438b820-4cc5-4d7b-8af5-9349fdd87ca8"
	Dec 09 12:06:37 embed-certs-005123 kubelet[2942]: E1209 12:06:37.746265    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745997745754306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:37 embed-certs-005123 kubelet[2942]: E1209 12:06:37.746788    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733745997745754306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:44 embed-certs-005123 kubelet[2942]: E1209 12:06:44.613130    2942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zfw9r" podUID="8438b820-4cc5-4d7b-8af5-9349fdd87ca8"
	Dec 09 12:06:47 embed-certs-005123 kubelet[2942]: E1209 12:06:47.641388    2942 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 12:06:47 embed-certs-005123 kubelet[2942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 12:06:47 embed-certs-005123 kubelet[2942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 12:06:47 embed-certs-005123 kubelet[2942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 12:06:47 embed-certs-005123 kubelet[2942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 12:06:47 embed-certs-005123 kubelet[2942]: E1209 12:06:47.749091    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746007748141525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:47 embed-certs-005123 kubelet[2942]: E1209 12:06:47.749135    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746007748141525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:56 embed-certs-005123 kubelet[2942]: E1209 12:06:56.612396    2942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zfw9r" podUID="8438b820-4cc5-4d7b-8af5-9349fdd87ca8"
	Dec 09 12:06:57 embed-certs-005123 kubelet[2942]: E1209 12:06:57.751441    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746017751187107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:06:57 embed-certs-005123 kubelet[2942]: E1209 12:06:57.751713    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746017751187107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [bd836c617c4c71eefb766c2dfb55170cf3cf91517592b1a7a183c74e32ea64a6] <==
	I1209 11:57:54.582895       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 11:57:54.593401       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 11:57:54.593458       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 11:57:54.609448       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 11:57:54.609625       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-005123_4ce3b392-4680-457a-956d-eef012adebc5!
	I1209 11:57:54.610638       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7aaac86-6035-4d6d-942e-248efc0c7825", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-005123_4ce3b392-4680-457a-956d-eef012adebc5 became leader
	I1209 11:57:54.710603       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-005123_4ce3b392-4680-457a-956d-eef012adebc5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-005123 -n embed-certs-005123
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-005123 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-zfw9r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-005123 describe pod metrics-server-6867b74b74-zfw9r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-005123 describe pod metrics-server-6867b74b74-zfw9r: exit status 1 (65.43614ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-zfw9r" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-005123 describe pod metrics-server-6867b74b74-zfw9r: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.18s)
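The post-mortem above flags metrics-server-6867b74b74-zfw9r as the only non-running pod, and the kubelet log shows why: it is stuck in ImagePullBackOff pulling fake.domain/registry.k8s.io/echoserver:1.4, an image on a registry that cannot be resolved, so it can never start. The test itself failed after its full pod wait elapsed (544.18s). A minimal way to repeat the post-mortem checks by hand against a live profile is sketched below; the context name comes from this run, while the k8s-app=metrics-server label selector (and the assumption that the pod still exists when you look) are illustrative only, since the describe above already returned NotFound once the pod had been cleaned up.

	# List every pod that is not Running, as helpers_test.go does above
	kubectl --context embed-certs-005123 get po -A --field-selector=status.phase!=Running
	# Show the image-pull events for the metrics-server pod (label selector assumed)
	kubectl --context embed-certs-005123 -n kube-system describe pod -l k8s-app=metrics-server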

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
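Every poll below fails at the TCP level ("connection refused"), meaning nothing is listening on the apiserver endpoint 192.168.61.132:8443 for the duration of the wait, so the dashboard pods can never be listed. As a quick manual check, assuming the old-k8s-version VM is still up, probing the same endpoint from the host separates a dead apiserver from an authorization problem: any HTTP-level reply (even an error) means the port is open, whereas the warnings below never get that far.

	# Reachability probe against the apiserver endpoint from the warnings below
	curl -k https://192.168.61.132:8443/healthz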
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
E1209 12:01:33.303775  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
E1209 12:03:22.653098  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
E1209 12:06:33.303575  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
(last message repeated 44 times: the apiserver at 192.168.61.132:8443 was still refusing connections)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
E1209 12:08:22.652787  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
[the same "connection refused" warning continued to repeat verbatim after the cert_rotation error above, until polling gave up]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-014592 -n old-k8s-version-014592
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-014592 -n old-k8s-version-014592: exit status 2 (244.82757ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-014592" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-014592 -n old-k8s-version-014592
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-014592 -n old-k8s-version-014592: exit status 2 (235.20948ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-014592 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-014592 logs -n 25: (1.515695293s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p running-upgrade-119214                              | running-upgrade-119214       | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-905993 | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:43 UTC |
	|         | disable-driver-mounts-905993                           |                              |         |         |                     |                     |
	| start   | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-005123            | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC | 09 Dec 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-005123                                  | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-820741             | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC | 09 Dec 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:46 UTC |
	| start   | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:47 UTC |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-005123                 | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-005123                                  | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-014592        | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-820741                  | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC | 09 Dec 24 11:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-482476  | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC | 09 Dec 24 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC |                     |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC | 09 Dec 24 11:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-014592             | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC | 09 Dec 24 11:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-482476       | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:49 UTC | 09 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 11:49:59
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 11:49:59.489110  663024 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:49:59.489218  663024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:49:59.489223  663024 out.go:358] Setting ErrFile to fd 2...
	I1209 11:49:59.489227  663024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:49:59.489393  663024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:49:59.489968  663024 out.go:352] Setting JSON to false
	I1209 11:49:59.491001  663024 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":16343,"bootTime":1733728656,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 11:49:59.491116  663024 start.go:139] virtualization: kvm guest
	I1209 11:49:59.493422  663024 out.go:177] * [default-k8s-diff-port-482476] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 11:49:59.495230  663024 notify.go:220] Checking for updates...
	I1209 11:49:59.495310  663024 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:49:59.496833  663024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:49:59.498350  663024 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:49:59.499799  663024 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:49:59.501159  663024 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 11:49:59.502351  663024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:49:59.503976  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:49:59.504355  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:49:59.504434  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:49:59.519867  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39863
	I1209 11:49:59.520292  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:49:59.520859  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:49:59.520886  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:49:59.521235  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:49:59.521438  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:49:59.521739  663024 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:49:59.522124  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:49:59.522225  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:49:59.537355  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39881
	I1209 11:49:59.537882  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:49:59.538473  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:49:59.538507  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:49:59.538862  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:49:59.539111  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:49:59.573642  663024 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 11:49:59.574808  663024 start.go:297] selected driver: kvm2
	I1209 11:49:59.574821  663024 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:49:59.574939  663024 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:49:59.575618  663024 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:49:59.575711  663024 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 11:49:59.591990  663024 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 11:49:59.592425  663024 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:49:59.592468  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:49:59.592500  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:49:59.592535  663024 start.go:340] cluster config:
	{Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:49:59.592645  663024 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:49:59.594451  663024 out.go:177] * Starting "default-k8s-diff-port-482476" primary control-plane node in "default-k8s-diff-port-482476" cluster
	I1209 11:49:56.270467  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:49:59.342522  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:49:59.595812  663024 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:49:59.595868  663024 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 11:49:59.595876  663024 cache.go:56] Caching tarball of preloaded images
	I1209 11:49:59.595966  663024 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 11:49:59.595978  663024 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 11:49:59.596080  663024 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/config.json ...
	I1209 11:49:59.596311  663024 start.go:360] acquireMachinesLock for default-k8s-diff-port-482476: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:50:05.422464  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:08.494459  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:14.574530  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:17.646514  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:23.726481  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:26.798485  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:32.878439  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:35.950501  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:42.030519  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:45.102528  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:51.182489  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:54.254539  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:00.334461  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:03.406475  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:09.486483  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:12.558522  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:18.638454  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:24.715494  662109 start.go:364] duration metric: took 4m3.035196519s to acquireMachinesLock for "no-preload-820741"
	I1209 11:51:24.715567  662109 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:51:24.715578  662109 fix.go:54] fixHost starting: 
	I1209 11:51:24.715984  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:51:24.716040  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:51:24.731722  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I1209 11:51:24.732247  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:51:24.732853  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:51:24.732876  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:51:24.733244  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:51:24.733437  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:24.733606  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:51:24.735295  662109 fix.go:112] recreateIfNeeded on no-preload-820741: state=Stopped err=<nil>
	I1209 11:51:24.735325  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	W1209 11:51:24.735521  662109 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:51:24.737237  662109 out.go:177] * Restarting existing kvm2 VM for "no-preload-820741" ...
	I1209 11:51:21.710446  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:24.712631  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:51:24.712695  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:51:24.713111  661546 buildroot.go:166] provisioning hostname "embed-certs-005123"
	I1209 11:51:24.713140  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:51:24.713398  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:51:24.715321  661546 machine.go:96] duration metric: took 4m34.547615205s to provisionDockerMachine
	I1209 11:51:24.715372  661546 fix.go:56] duration metric: took 4m34.572283015s for fixHost
	I1209 11:51:24.715381  661546 start.go:83] releasing machines lock for "embed-certs-005123", held for 4m34.572321017s
	W1209 11:51:24.715401  661546 start.go:714] error starting host: provision: host is not running
	W1209 11:51:24.715538  661546 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1209 11:51:24.715550  661546 start.go:729] Will try again in 5 seconds ...
	I1209 11:51:24.738507  662109 main.go:141] libmachine: (no-preload-820741) Calling .Start
	I1209 11:51:24.738692  662109 main.go:141] libmachine: (no-preload-820741) Ensuring networks are active...
	I1209 11:51:24.739450  662109 main.go:141] libmachine: (no-preload-820741) Ensuring network default is active
	I1209 11:51:24.739799  662109 main.go:141] libmachine: (no-preload-820741) Ensuring network mk-no-preload-820741 is active
	I1209 11:51:24.740206  662109 main.go:141] libmachine: (no-preload-820741) Getting domain xml...
	I1209 11:51:24.740963  662109 main.go:141] libmachine: (no-preload-820741) Creating domain...
	I1209 11:51:25.958244  662109 main.go:141] libmachine: (no-preload-820741) Waiting to get IP...
	I1209 11:51:25.959122  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:25.959507  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:25.959585  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:25.959486  663348 retry.go:31] will retry after 256.759149ms: waiting for machine to come up
	I1209 11:51:26.218626  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.219187  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.219222  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.219121  663348 retry.go:31] will retry after 259.957451ms: waiting for machine to come up
	I1209 11:51:26.480403  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.480800  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.480828  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.480753  663348 retry.go:31] will retry after 482.242492ms: waiting for machine to come up
	I1209 11:51:29.718422  661546 start.go:360] acquireMachinesLock for embed-certs-005123: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:51:26.964420  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.964870  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.964903  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.964821  663348 retry.go:31] will retry after 386.489156ms: waiting for machine to come up
	I1209 11:51:27.353471  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:27.353850  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:27.353875  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:27.353796  663348 retry.go:31] will retry after 602.322538ms: waiting for machine to come up
	I1209 11:51:27.957621  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:27.958020  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:27.958051  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:27.957967  663348 retry.go:31] will retry after 747.355263ms: waiting for machine to come up
	I1209 11:51:28.707049  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:28.707486  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:28.707515  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:28.707436  663348 retry.go:31] will retry after 1.034218647s: waiting for machine to come up
	I1209 11:51:29.743755  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:29.744171  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:29.744213  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:29.744119  663348 retry.go:31] will retry after 1.348194555s: waiting for machine to come up
	I1209 11:51:31.094696  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:31.095202  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:31.095234  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:31.095124  663348 retry.go:31] will retry after 1.226653754s: waiting for machine to come up
	I1209 11:51:32.323529  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:32.323935  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:32.323959  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:32.323884  663348 retry.go:31] will retry after 2.008914491s: waiting for machine to come up
	I1209 11:51:34.335246  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:34.335619  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:34.335658  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:34.335593  663348 retry.go:31] will retry after 1.835576732s: waiting for machine to come up
	I1209 11:51:36.173316  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:36.173752  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:36.173786  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:36.173711  663348 retry.go:31] will retry after 3.204076548s: waiting for machine to come up
	I1209 11:51:39.382184  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:39.382619  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:39.382656  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:39.382560  663348 retry.go:31] will retry after 3.298451611s: waiting for machine to come up
	I1209 11:51:44.103077  662586 start.go:364] duration metric: took 3m16.308265809s to acquireMachinesLock for "old-k8s-version-014592"
	I1209 11:51:44.103164  662586 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:51:44.103178  662586 fix.go:54] fixHost starting: 
	I1209 11:51:44.103657  662586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:51:44.103716  662586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:51:44.121162  662586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I1209 11:51:44.121672  662586 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:51:44.122203  662586 main.go:141] libmachine: Using API Version  1
	I1209 11:51:44.122232  662586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:51:44.122644  662586 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:51:44.122852  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:51:44.123023  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetState
	I1209 11:51:44.124544  662586 fix.go:112] recreateIfNeeded on old-k8s-version-014592: state=Stopped err=<nil>
	I1209 11:51:44.124567  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	W1209 11:51:44.124704  662586 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:51:44.126942  662586 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-014592" ...
	I1209 11:51:42.684438  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.684824  662109 main.go:141] libmachine: (no-preload-820741) Found IP for machine: 192.168.39.169
	I1209 11:51:42.684859  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has current primary IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.684867  662109 main.go:141] libmachine: (no-preload-820741) Reserving static IP address...
	I1209 11:51:42.685269  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "no-preload-820741", mac: "52:54:00:27:4c:0e", ip: "192.168.39.169"} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.685296  662109 main.go:141] libmachine: (no-preload-820741) DBG | skip adding static IP to network mk-no-preload-820741 - found existing host DHCP lease matching {name: "no-preload-820741", mac: "52:54:00:27:4c:0e", ip: "192.168.39.169"}
	I1209 11:51:42.685311  662109 main.go:141] libmachine: (no-preload-820741) Reserved static IP address: 192.168.39.169
	I1209 11:51:42.685334  662109 main.go:141] libmachine: (no-preload-820741) Waiting for SSH to be available...
	I1209 11:51:42.685348  662109 main.go:141] libmachine: (no-preload-820741) DBG | Getting to WaitForSSH function...
	I1209 11:51:42.687295  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.687588  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.687625  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.687702  662109 main.go:141] libmachine: (no-preload-820741) DBG | Using SSH client type: external
	I1209 11:51:42.687790  662109 main.go:141] libmachine: (no-preload-820741) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa (-rw-------)
	I1209 11:51:42.687824  662109 main.go:141] libmachine: (no-preload-820741) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:51:42.687844  662109 main.go:141] libmachine: (no-preload-820741) DBG | About to run SSH command:
	I1209 11:51:42.687857  662109 main.go:141] libmachine: (no-preload-820741) DBG | exit 0
	I1209 11:51:42.822609  662109 main.go:141] libmachine: (no-preload-820741) DBG | SSH cmd err, output: <nil>: 
	I1209 11:51:42.822996  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetConfigRaw
	I1209 11:51:42.823665  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:42.826484  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.826783  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.826808  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.827050  662109 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/config.json ...
	I1209 11:51:42.827323  662109 machine.go:93] provisionDockerMachine start ...
	I1209 11:51:42.827346  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:42.827620  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:42.830224  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.830569  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.830599  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.830717  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:42.830909  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.831107  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.831274  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:42.831454  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:42.831790  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:42.831807  662109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:51:42.938456  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:51:42.938500  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:42.938778  662109 buildroot.go:166] provisioning hostname "no-preload-820741"
	I1209 11:51:42.938813  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:42.939023  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:42.941706  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.942236  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.942267  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.942390  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:42.942606  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.942773  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.942922  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:42.943177  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:42.943382  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:42.943406  662109 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-820741 && echo "no-preload-820741" | sudo tee /etc/hostname
	I1209 11:51:43.065816  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-820741
	
	I1209 11:51:43.065849  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.068607  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.068916  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.068951  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.069127  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.069256  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.069351  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.069514  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.069637  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.069841  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.069861  662109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-820741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-820741/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-820741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:51:43.182210  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:51:43.182257  662109 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:51:43.182289  662109 buildroot.go:174] setting up certificates
	I1209 11:51:43.182305  662109 provision.go:84] configureAuth start
	I1209 11:51:43.182323  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:43.182674  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:43.185513  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.185872  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.185897  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.186018  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.188128  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.188482  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.188534  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.188668  662109 provision.go:143] copyHostCerts
	I1209 11:51:43.188752  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:51:43.188774  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:51:43.188840  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:51:43.188928  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:51:43.188936  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:51:43.188963  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:51:43.189019  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:51:43.189027  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:51:43.189049  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:51:43.189104  662109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.no-preload-820741 san=[127.0.0.1 192.168.39.169 localhost minikube no-preload-820741]
	I1209 11:51:43.488258  662109 provision.go:177] copyRemoteCerts
	I1209 11:51:43.488336  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:51:43.488367  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.491689  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.492025  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.492059  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.492267  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.492465  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.492635  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.492768  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:43.577708  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:51:43.602000  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 11:51:43.627251  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:51:43.651591  662109 provision.go:87] duration metric: took 469.266358ms to configureAuth
	I1209 11:51:43.651626  662109 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:51:43.651863  662109 config.go:182] Loaded profile config "no-preload-820741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:51:43.652059  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.655150  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.655489  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.655518  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.655738  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.655963  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.656146  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.656295  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.656483  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.656688  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.656710  662109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:51:43.870704  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:51:43.870738  662109 machine.go:96] duration metric: took 1.043398486s to provisionDockerMachine
	I1209 11:51:43.870756  662109 start.go:293] postStartSetup for "no-preload-820741" (driver="kvm2")
	I1209 11:51:43.870771  662109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:51:43.870796  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:43.871158  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:51:43.871186  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.873863  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.874207  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.874230  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.874408  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.874610  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.874800  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.874925  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:43.956874  662109 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:51:43.960825  662109 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:51:43.960853  662109 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:51:43.960919  662109 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:51:43.960993  662109 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:51:43.961095  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:51:43.970138  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:51:43.991975  662109 start.go:296] duration metric: took 121.20118ms for postStartSetup
	I1209 11:51:43.992020  662109 fix.go:56] duration metric: took 19.276442325s for fixHost
	I1209 11:51:43.992043  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.994707  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.995035  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.995069  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.995186  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.995403  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.995568  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.995716  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.995927  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.996107  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.996117  662109 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:51:44.102890  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745104.077047488
	
	I1209 11:51:44.102914  662109 fix.go:216] guest clock: 1733745104.077047488
	I1209 11:51:44.102922  662109 fix.go:229] Guest: 2024-12-09 11:51:44.077047488 +0000 UTC Remote: 2024-12-09 11:51:43.992024296 +0000 UTC m=+262.463051778 (delta=85.023192ms)
	I1209 11:51:44.102952  662109 fix.go:200] guest clock delta is within tolerance: 85.023192ms
	I1209 11:51:44.102957  662109 start.go:83] releasing machines lock for "no-preload-820741", held for 19.387413234s
	I1209 11:51:44.102980  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.103272  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:44.105929  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.106314  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.106341  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.106567  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107102  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107323  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107453  662109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:51:44.107507  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:44.107640  662109 ssh_runner.go:195] Run: cat /version.json
	I1209 11:51:44.107672  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:44.110422  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110792  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.110822  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110840  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110984  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:44.111194  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:44.111376  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.111395  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.111408  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:44.111569  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:44.111589  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:44.111722  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:44.111827  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:44.111986  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:44.228799  662109 ssh_runner.go:195] Run: systemctl --version
	I1209 11:51:44.234678  662109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:51:44.383290  662109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:51:44.388906  662109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:51:44.388981  662109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:51:44.405271  662109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:51:44.405308  662109 start.go:495] detecting cgroup driver to use...
	I1209 11:51:44.405389  662109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:51:44.425480  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:51:44.439827  662109 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:51:44.439928  662109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:51:44.454750  662109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:51:44.470828  662109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:51:44.595400  662109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:51:44.756743  662109 docker.go:233] disabling docker service ...
	I1209 11:51:44.756817  662109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:51:44.774069  662109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:51:44.788188  662109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:51:44.909156  662109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:51:45.036992  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:51:45.051284  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:51:45.071001  662109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:51:45.071074  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.081491  662109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:51:45.081549  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.091476  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.103237  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.114723  662109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:51:45.126330  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.136501  662109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.152804  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.163221  662109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:51:45.173297  662109 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:51:45.173379  662109 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:51:45.186209  662109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:51:45.195773  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:51:45.339593  662109 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:51:45.438766  662109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:51:45.438851  662109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:51:45.444775  662109 start.go:563] Will wait 60s for crictl version
	I1209 11:51:45.444847  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:45.449585  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:51:45.493796  662109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:51:45.493899  662109 ssh_runner.go:195] Run: crio --version
	I1209 11:51:45.521391  662109 ssh_runner.go:195] Run: crio --version
	I1209 11:51:45.551249  662109 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:51:45.552714  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:45.555910  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:45.556271  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:45.556298  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:45.556571  662109 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 11:51:45.560718  662109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:51:45.573027  662109 kubeadm.go:883] updating cluster {Name:no-preload-820741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:51:45.573171  662109 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:51:45.573226  662109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:51:45.613696  662109 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:51:45.613724  662109 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 11:51:45.613810  662109 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.613847  662109 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.613864  662109 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.613880  662109 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:45.613857  662109 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1209 11:51:45.613939  662109 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.613801  662109 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.613810  662109 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:45.615885  662109 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.615983  662109 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.615889  662109 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:45.615885  662109 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:45.615893  662109 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.615891  662109 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1209 11:51:45.615897  662109 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.615893  662109 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.819757  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.836546  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1209 11:51:45.851918  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.857461  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.857468  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.863981  662109 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1209 11:51:45.864038  662109 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.864122  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:45.865289  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.868361  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.030476  662109 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1209 11:51:46.030525  662109 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.030582  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030525  662109 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1209 11:51:46.030603  662109 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1209 11:51:46.030625  662109 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.030652  662109 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.030694  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030655  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030720  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.030760  662109 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1209 11:51:46.030794  662109 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.030823  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030823  662109 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1209 11:51:46.030845  662109 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.030868  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.041983  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.042072  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.042088  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.086909  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.086966  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.086997  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.141636  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.141723  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.141779  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.249908  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.249972  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.250024  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.250056  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.266345  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.266425  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.376691  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1209 11:51:46.376784  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1209 11:51:46.376904  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.376937  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.376911  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:46.376980  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.407997  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1209 11:51:46.408015  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1209 11:51:46.408120  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:46.408120  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:46.450341  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1209 11:51:46.450374  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.450445  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.450503  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1209 11:51:46.450537  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1209 11:51:46.450541  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1209 11:51:46.450570  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1209 11:51:46.450621  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:46.450621  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:46.450621  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1209 11:51:44.128421  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .Start
	I1209 11:51:44.128663  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring networks are active...
	I1209 11:51:44.129435  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring network default is active
	I1209 11:51:44.129805  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring network mk-old-k8s-version-014592 is active
	I1209 11:51:44.130314  662586 main.go:141] libmachine: (old-k8s-version-014592) Getting domain xml...
	I1209 11:51:44.131070  662586 main.go:141] libmachine: (old-k8s-version-014592) Creating domain...
	I1209 11:51:45.405214  662586 main.go:141] libmachine: (old-k8s-version-014592) Waiting to get IP...
	I1209 11:51:45.406116  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:45.406680  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:45.406716  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:45.406613  663492 retry.go:31] will retry after 249.130873ms: waiting for machine to come up
	I1209 11:51:45.657224  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:45.657727  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:45.657756  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:45.657687  663492 retry.go:31] will retry after 363.458278ms: waiting for machine to come up
	I1209 11:51:46.023431  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.023912  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.023945  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.023851  663492 retry.go:31] will retry after 313.220722ms: waiting for machine to come up
	I1209 11:51:46.339300  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.339850  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.339876  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.339791  663492 retry.go:31] will retry after 517.613322ms: waiting for machine to come up
	I1209 11:51:46.859825  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.860229  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.860260  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.860198  663492 retry.go:31] will retry after 710.195232ms: waiting for machine to come up
	I1209 11:51:47.572460  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:47.573030  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:47.573080  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:47.573008  663492 retry.go:31] will retry after 620.717522ms: waiting for machine to come up
	I1209 11:51:46.869631  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.822213  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.371704342s)
	I1209 11:51:48.822263  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1209 11:51:48.822262  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.371603127s)
	I1209 11:51:48.822296  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1209 11:51:48.822295  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.371584353s)
	I1209 11:51:48.822298  662109 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:48.822309  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1209 11:51:48.822324  662109 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.952666874s)
	I1209 11:51:48.822364  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:48.822367  662109 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1209 11:51:48.822416  662109 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.822460  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:50.794288  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.971891497s)
	I1209 11:51:50.794330  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1209 11:51:50.794357  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:50.794357  662109 ssh_runner.go:235] Completed: which crictl: (1.971876587s)
	I1209 11:51:50.794417  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:50.794437  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.195603  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:48.196140  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:48.196172  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:48.196083  663492 retry.go:31] will retry after 747.45082ms: waiting for machine to come up
	I1209 11:51:48.945230  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:48.945682  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:48.945737  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:48.945661  663492 retry.go:31] will retry after 1.307189412s: waiting for machine to come up
	I1209 11:51:50.254747  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:50.255335  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:50.255359  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:50.255276  663492 retry.go:31] will retry after 1.269881759s: waiting for machine to come up
	I1209 11:51:51.526966  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:51.527400  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:51.527431  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:51.527348  663492 retry.go:31] will retry after 1.424091669s: waiting for machine to come up
	I1209 11:51:52.958981  662109 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.164517823s)
	I1209 11:51:52.959044  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.164597978s)
	I1209 11:51:52.959089  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1209 11:51:52.959120  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:52.959057  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:52.959203  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:53.007629  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:54.832641  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.873398185s)
	I1209 11:51:54.832686  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1209 11:51:54.832694  662109 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.825022672s)
	I1209 11:51:54.832714  662109 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:54.832748  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1209 11:51:54.832769  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:54.832853  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:51:52.953290  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:52.953711  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:52.953743  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:52.953658  663492 retry.go:31] will retry after 2.009829783s: waiting for machine to come up
	I1209 11:51:54.965818  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:54.966337  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:54.966372  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:54.966285  663492 retry.go:31] will retry after 2.209879817s: waiting for machine to come up
	I1209 11:51:57.177397  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:57.177870  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:57.177901  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:57.177805  663492 retry.go:31] will retry after 2.999056002s: waiting for machine to come up
	I1209 11:51:58.433813  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.600992195s)
	I1209 11:51:58.433889  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1209 11:51:58.433913  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:58.433831  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.600948593s)
	I1209 11:51:58.433947  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1209 11:51:58.433961  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:59.792012  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.35801884s)
	I1209 11:51:59.792049  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1209 11:51:59.792078  662109 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:51:59.792127  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:52:00.635140  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1209 11:52:00.635193  662109 cache_images.go:123] Successfully loaded all cached images
	I1209 11:52:00.635212  662109 cache_images.go:92] duration metric: took 15.021464053s to LoadCachedImages
	I1209 11:52:00.635232  662109 kubeadm.go:934] updating node { 192.168.39.169 8443 v1.31.2 crio true true} ...
	I1209 11:52:00.635395  662109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-820741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:00.635481  662109 ssh_runner.go:195] Run: crio config
	I1209 11:52:00.680321  662109 cni.go:84] Creating CNI manager for ""
	I1209 11:52:00.680345  662109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:00.680370  662109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:00.680394  662109 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.169 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-820741 NodeName:no-preload-820741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:00.680545  662109 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-820741"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.169"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.169"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:00.680614  662109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:00.690391  662109 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:00.690484  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:00.699034  662109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1209 11:52:00.714710  662109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:00.730375  662109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1209 11:52:00.747519  662109 ssh_runner.go:195] Run: grep 192.168.39.169	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:00.751163  662109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:00.762405  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:00.881308  662109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:00.898028  662109 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741 for IP: 192.168.39.169
	I1209 11:52:00.898060  662109 certs.go:194] generating shared ca certs ...
	I1209 11:52:00.898085  662109 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:00.898349  662109 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:00.898415  662109 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:00.898429  662109 certs.go:256] generating profile certs ...
	I1209 11:52:00.898565  662109 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.key
	I1209 11:52:00.898646  662109 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.key.814e22a1
	I1209 11:52:00.898701  662109 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.key
	I1209 11:52:00.898859  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:00.898904  662109 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:00.898918  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:00.898949  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:00.898982  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:00.899007  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:00.899045  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:00.899994  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:00.943848  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:00.970587  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:01.025164  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:01.055766  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 11:52:01.089756  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:52:01.112171  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:01.135928  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 11:52:01.157703  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:01.179806  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:01.201663  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:01.223314  662109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:01.239214  662109 ssh_runner.go:195] Run: openssl version
	I1209 11:52:01.244687  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:01.254630  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.258801  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.258849  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.264219  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:01.274077  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:01.284511  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.289141  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.289216  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.295079  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:01.305606  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:01.315795  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.320085  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.320147  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.325590  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:01.335747  662109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:01.340113  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:01.346217  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:01.351799  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:01.357441  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:01.362784  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:01.368210  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 11:52:01.373975  662109 kubeadm.go:392] StartCluster: {Name:no-preload-820741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:01.374101  662109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:01.374160  662109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:01.409780  662109 cri.go:89] found id: ""
	I1209 11:52:01.409852  662109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:01.419505  662109 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:01.419550  662109 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:01.419603  662109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:01.429000  662109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:01.429999  662109 kubeconfig.go:125] found "no-preload-820741" server: "https://192.168.39.169:8443"
	I1209 11:52:01.432151  662109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:01.440964  662109 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.169
	I1209 11:52:01.441003  662109 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:01.441021  662109 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:01.441084  662109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:01.474788  662109 cri.go:89] found id: ""
	I1209 11:52:01.474865  662109 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:01.491360  662109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:01.500483  662109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:01.500505  662109 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:01.500558  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:01.509190  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:01.509251  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:01.518248  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:01.526845  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:01.526909  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:01.535849  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:01.544609  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:01.544672  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:01.553527  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:01.561876  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:01.561928  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
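The four grep/rm pairs above implement a single rule: a kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. Condensed into one loop (a sketch, not the code minikube actually runs):

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
    || sudo rm -f "/etc/kubernetes/$f"
done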
	I1209 11:52:00.178781  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:00.179225  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:52:00.179273  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:52:00.179165  663492 retry.go:31] will retry after 4.532370187s: waiting for machine to come up
	I1209 11:52:05.915073  663024 start.go:364] duration metric: took 2m6.318720193s to acquireMachinesLock for "default-k8s-diff-port-482476"
	I1209 11:52:05.915166  663024 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:52:05.915179  663024 fix.go:54] fixHost starting: 
	I1209 11:52:05.915652  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:05.915716  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:05.933810  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
	I1209 11:52:05.934363  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:05.935019  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:52:05.935071  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:05.935489  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:05.935682  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:05.935879  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:52:05.937627  663024 fix.go:112] recreateIfNeeded on default-k8s-diff-port-482476: state=Stopped err=<nil>
	I1209 11:52:05.937660  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	W1209 11:52:05.937842  663024 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:52:05.939893  663024 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-482476" ...
	I1209 11:52:01.570657  662109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:01.579782  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:01.680268  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.573653  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.762024  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.826444  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
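Rather than a full kubeadm init, the restart path replays the individual init phases in dependency order: certificates first, then the kubeconfigs signed by them, then kubelet bootstrap, and finally the static-pod manifests for the control plane and etcd. The same sequence written out as plain commands (binary path as in the log):

B=/var/lib/minikube/binaries/v1.31.2
sudo env PATH="$B:$PATH" kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
sudo env PATH="$B:$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
sudo env PATH="$B:$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
sudo env PATH="$B:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
sudo env PATH="$B:$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml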
	I1209 11:52:02.932170  662109 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:02.932291  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.432933  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.933186  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.948529  662109 api_server.go:72] duration metric: took 1.016357501s to wait for apiserver process to appear ...
	I1209 11:52:03.948565  662109 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:03.948595  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.443635  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.443675  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:06.443692  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.490801  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.490839  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:06.490860  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.502460  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.502497  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:04.713201  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.713711  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has current primary IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.713817  662586 main.go:141] libmachine: (old-k8s-version-014592) Found IP for machine: 192.168.61.132
	I1209 11:52:04.713853  662586 main.go:141] libmachine: (old-k8s-version-014592) Reserving static IP address...
	I1209 11:52:04.714267  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "old-k8s-version-014592", mac: "52:54:00:54:72:3e", ip: "192.168.61.132"} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.714298  662586 main.go:141] libmachine: (old-k8s-version-014592) Reserved static IP address: 192.168.61.132
	I1209 11:52:04.714318  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | skip adding static IP to network mk-old-k8s-version-014592 - found existing host DHCP lease matching {name: "old-k8s-version-014592", mac: "52:54:00:54:72:3e", ip: "192.168.61.132"}
	I1209 11:52:04.714332  662586 main.go:141] libmachine: (old-k8s-version-014592) Waiting for SSH to be available...
	I1209 11:52:04.714347  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Getting to WaitForSSH function...
	I1209 11:52:04.716632  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.716972  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.717005  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.717129  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Using SSH client type: external
	I1209 11:52:04.717157  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa (-rw-------)
	I1209 11:52:04.717192  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:04.717206  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | About to run SSH command:
	I1209 11:52:04.717223  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | exit 0
	I1209 11:52:04.846290  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:04.846675  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetConfigRaw
	I1209 11:52:04.847483  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:04.850430  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.850859  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.850888  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.851113  662586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/config.json ...
	I1209 11:52:04.851328  662586 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:04.851348  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:04.851547  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:04.854318  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.854622  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.854654  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.854782  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:04.854959  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.855134  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.855276  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:04.855438  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:04.855696  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:04.855709  662586 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:04.963021  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:04.963059  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:04.963344  662586 buildroot.go:166] provisioning hostname "old-k8s-version-014592"
	I1209 11:52:04.963368  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:04.963545  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:04.966102  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.966461  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.966496  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.966607  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:04.966780  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.966919  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.967056  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:04.967221  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:04.967407  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:04.967419  662586 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-014592 && echo "old-k8s-version-014592" | sudo tee /etc/hostname
	I1209 11:52:05.094147  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-014592
	
	I1209 11:52:05.094210  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.097298  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.097729  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.097765  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.097949  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.098197  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.098460  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.098632  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.098829  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.099046  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.099082  662586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-014592' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-014592/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-014592' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:05.210739  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:05.210785  662586 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:05.210846  662586 buildroot.go:174] setting up certificates
	I1209 11:52:05.210859  662586 provision.go:84] configureAuth start
	I1209 11:52:05.210881  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:05.211210  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:05.214546  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.214937  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.214967  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.215167  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.217866  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.218269  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.218300  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.218452  662586 provision.go:143] copyHostCerts
	I1209 11:52:05.218530  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:05.218558  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:05.218630  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:05.218807  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:05.218820  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:05.218863  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:05.218943  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:05.218953  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:05.218983  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:05.219060  662586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-014592 san=[127.0.0.1 192.168.61.132 localhost minikube old-k8s-version-014592]
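The server certificate above is generated in Go inside libmachine, but the result is an ordinary CA-signed TLS server certificate whose SANs are the list printed in the log. A rough openssl equivalent, purely illustrative (file names here are assumptions, not what libmachine executes):

openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
  -subj "/O=jenkins.old-k8s-version-014592" -out server.csr
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -days 365 -out server.pem \
  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.61.132,DNS:localhost,DNS:minikube,DNS:old-k8s-version-014592')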
	I1209 11:52:05.292744  662586 provision.go:177] copyRemoteCerts
	I1209 11:52:05.292830  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:05.292867  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.296244  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.296670  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.296712  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.296896  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.297111  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.297330  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.297514  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.381148  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:05.404883  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 11:52:05.433421  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:05.456775  662586 provision.go:87] duration metric: took 245.894878ms to configureAuth
	I1209 11:52:05.456811  662586 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:05.457003  662586 config.go:182] Loaded profile config "old-k8s-version-014592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 11:52:05.457082  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.459984  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.460372  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.460415  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.460631  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.460851  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.461021  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.461217  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.461481  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.461702  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.461722  662586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:05.683276  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:05.683311  662586 machine.go:96] duration metric: took 831.968459ms to provisionDockerMachine
	I1209 11:52:05.683335  662586 start.go:293] postStartSetup for "old-k8s-version-014592" (driver="kvm2")
	I1209 11:52:05.683349  662586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:05.683391  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.683809  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:05.683850  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.687116  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.687540  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.687579  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.687787  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.688013  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.688204  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.688439  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.768777  662586 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:05.772572  662586 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:05.772603  662586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:05.772690  662586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:05.772813  662586 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:05.772942  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:05.784153  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:05.808677  662586 start.go:296] duration metric: took 125.320445ms for postStartSetup
	I1209 11:52:05.808736  662586 fix.go:56] duration metric: took 21.705557963s for fixHost
	I1209 11:52:05.808766  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.811685  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.812053  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.812090  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.812426  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.812639  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.812853  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.812996  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.813345  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.813562  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.813572  662586 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:05.914863  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745125.875320243
	
	I1209 11:52:05.914892  662586 fix.go:216] guest clock: 1733745125.875320243
	I1209 11:52:05.914906  662586 fix.go:229] Guest: 2024-12-09 11:52:05.875320243 +0000 UTC Remote: 2024-12-09 11:52:05.808742373 +0000 UTC m=+218.159686894 (delta=66.57787ms)
	I1209 11:52:05.914941  662586 fix.go:200] guest clock delta is within tolerance: 66.57787ms
	I1209 11:52:05.914952  662586 start.go:83] releasing machines lock for "old-k8s-version-014592", held for 21.811813657s
	I1209 11:52:05.914983  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.915289  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:05.918015  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.918513  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.918546  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.918662  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919315  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919508  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919628  662586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:05.919684  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.919739  662586 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:05.919767  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.922529  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.922816  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923096  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.923121  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923223  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.923258  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923291  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.923459  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.923602  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.923616  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.923848  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.923900  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.924030  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.924104  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:06.037215  662586 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:06.043193  662586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:06.193717  662586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:06.199693  662586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:06.199786  662586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:06.216007  662586 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:06.216040  662586 start.go:495] detecting cgroup driver to use...
	I1209 11:52:06.216131  662586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:06.233631  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:06.249730  662586 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:06.249817  662586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:06.265290  662586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:06.281676  662586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:06.432116  662586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:06.605899  662586 docker.go:233] disabling docker service ...
	I1209 11:52:06.606004  662586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:06.622861  662586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:06.637605  662586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:06.772842  662586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:06.905950  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:06.923048  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:06.943483  662586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 11:52:06.943542  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.957647  662586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:06.957725  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.970221  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.981243  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.992084  662586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:07.004284  662586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:07.014329  662586 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:07.014411  662586 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:07.028104  662586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
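The block above prepares the container runtime before it is restarted: the sed edits pin CRI-O to the expected pause image and the cgroupfs cgroup manager, and br_netfilter plus ip_forward are the kernel prerequisites for bridged pod traffic. After the edits, the drop-in should carry values like the following (illustrative check):

grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
# pause_image = "registry.k8s.io/pause:3.2"
# cgroup_manager = "cgroupfs"
# conmon_cgroup = "pod"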
	I1209 11:52:07.038782  662586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:07.155779  662586 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:07.271726  662586 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:07.271815  662586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:07.276994  662586 start.go:563] Will wait 60s for crictl version
	I1209 11:52:07.277061  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:07.281212  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:07.328839  662586 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:07.328959  662586 ssh_runner.go:195] Run: crio --version
	I1209 11:52:07.360632  662586 ssh_runner.go:195] Run: crio --version
	I1209 11:52:07.393046  662586 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1209 11:52:07.394357  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:07.398002  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:07.398539  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:07.398564  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:07.398893  662586 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:07.404512  662586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:07.417822  662586 kubeadm.go:883] updating cluster {Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:07.418006  662586 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 11:52:07.418108  662586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:07.473163  662586 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:52:07.473249  662586 ssh_runner.go:195] Run: which lz4
	I1209 11:52:07.478501  662586 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:07.483744  662586 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:07.483786  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
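Because no preloaded images were found on the guest, the ~473 MB preload tarball is copied over; it is then unpacked into /var so CRI-O starts with the image store already populated. A sketch of that extraction step (the exact flags minikube passes are an assumption here):

# lz4-compressed tarball; -I selects the decompressor, -C unpacks under /var
sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4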
	I1209 11:52:06.949438  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.959097  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:06.959150  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:07.449249  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:07.466817  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:07.466860  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:07.948998  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:07.958340  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I1209 11:52:07.966049  662109 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:07.966095  662109 api_server.go:131] duration metric: took 4.017521352s to wait for apiserver health ...
	I1209 11:52:07.966111  662109 cni.go:84] Creating CNI manager for ""
	I1209 11:52:07.966121  662109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:07.967962  662109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:52:05.941206  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Start
	I1209 11:52:05.941411  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring networks are active...
	I1209 11:52:05.942245  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring network default is active
	I1209 11:52:05.942724  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring network mk-default-k8s-diff-port-482476 is active
	I1209 11:52:05.943274  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Getting domain xml...
	I1209 11:52:05.944080  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Creating domain...
	I1209 11:52:07.394633  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting to get IP...
	I1209 11:52:07.396032  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.397444  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.397560  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.397434  663663 retry.go:31] will retry after 205.256699ms: waiting for machine to come up
	I1209 11:52:07.604209  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.604884  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.604920  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.604828  663663 retry.go:31] will retry after 291.255961ms: waiting for machine to come up
	I1209 11:52:07.897467  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.898992  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.899020  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.898866  663663 retry.go:31] will retry after 437.180412ms: waiting for machine to come up
	I1209 11:52:08.337664  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.338195  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.338235  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:08.338151  663663 retry.go:31] will retry after 603.826089ms: waiting for machine to come up
	I1209 11:52:08.944048  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.944672  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.944702  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:08.944612  663663 retry.go:31] will retry after 557.882868ms: waiting for machine to come up
	I1209 11:52:07.969367  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:07.986045  662109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:52:08.075377  662109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:08.091609  662109 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:08.091648  662109 system_pods.go:61] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:08.091656  662109 system_pods.go:61] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:08.091664  662109 system_pods.go:61] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:08.091670  662109 system_pods.go:61] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:08.091675  662109 system_pods.go:61] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:52:08.091681  662109 system_pods.go:61] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:08.091686  662109 system_pods.go:61] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:08.091691  662109 system_pods.go:61] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:08.091699  662109 system_pods.go:74] duration metric: took 16.289433ms to wait for pod list to return data ...
	I1209 11:52:08.091707  662109 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:08.096961  662109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:08.097010  662109 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:08.097047  662109 node_conditions.go:105] duration metric: took 5.334194ms to run NodePressure ...
	I1209 11:52:08.097073  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:08.573868  662109 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:08.583670  662109 kubeadm.go:739] kubelet initialised
	I1209 11:52:08.583700  662109 kubeadm.go:740] duration metric: took 9.800796ms waiting for restarted kubelet to initialise ...
	I1209 11:52:08.583713  662109 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:08.592490  662109 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.600581  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.600611  662109 pod_ready.go:82] duration metric: took 8.087599ms for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.600623  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.600633  662109 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.609663  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "etcd-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.609698  662109 pod_ready.go:82] duration metric: took 9.054194ms for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.609712  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "etcd-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.609722  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.615482  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-apiserver-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.615514  662109 pod_ready.go:82] duration metric: took 5.78152ms for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.615526  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-apiserver-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.615536  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.623662  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.623698  662109 pod_ready.go:82] duration metric: took 8.151877ms for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.623713  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.623722  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.978286  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-proxy-hpvvp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.978323  662109 pod_ready.go:82] duration metric: took 354.589596ms for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.978344  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-proxy-hpvvp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.978356  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:09.378434  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-scheduler-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.378471  662109 pod_ready.go:82] duration metric: took 400.107028ms for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:09.378484  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-scheduler-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.378494  662109 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:09.778087  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.778117  662109 pod_ready.go:82] duration metric: took 399.613592ms for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:09.778129  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.778138  662109 pod_ready.go:39] duration metric: took 1.194413796s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:09.778162  662109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:52:09.793629  662109 ops.go:34] apiserver oom_adj: -16
	I1209 11:52:09.793663  662109 kubeadm.go:597] duration metric: took 8.374104555s to restartPrimaryControlPlane
	I1209 11:52:09.793681  662109 kubeadm.go:394] duration metric: took 8.419719684s to StartCluster
	I1209 11:52:09.793708  662109 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:09.793848  662109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:52:09.796407  662109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:09.796774  662109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:52:09.796837  662109 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:52:09.796954  662109 addons.go:69] Setting storage-provisioner=true in profile "no-preload-820741"
	I1209 11:52:09.796975  662109 addons.go:234] Setting addon storage-provisioner=true in "no-preload-820741"
	W1209 11:52:09.796984  662109 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:52:09.797023  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.797048  662109 config.go:182] Loaded profile config "no-preload-820741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:09.797086  662109 addons.go:69] Setting default-storageclass=true in profile "no-preload-820741"
	I1209 11:52:09.797110  662109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-820741"
	I1209 11:52:09.797119  662109 addons.go:69] Setting metrics-server=true in profile "no-preload-820741"
	I1209 11:52:09.797150  662109 addons.go:234] Setting addon metrics-server=true in "no-preload-820741"
	W1209 11:52:09.797160  662109 addons.go:243] addon metrics-server should already be in state true
	I1209 11:52:09.797204  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.797545  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797571  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797579  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797596  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.797611  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.797620  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.799690  662109 out.go:177] * Verifying Kubernetes components...
	I1209 11:52:09.801035  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:09.814968  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33991
	I1209 11:52:09.815010  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34319
	I1209 11:52:09.815576  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.815715  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.816340  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.816361  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.816666  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.816683  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.816745  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.817402  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.817449  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.818118  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.818680  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.818718  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.842345  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37501
	I1209 11:52:09.842582  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44183
	I1209 11:52:09.842703  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38793
	I1209 11:52:09.843479  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843608  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843667  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843973  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.843999  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.844168  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.844180  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.844575  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.844773  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.845107  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.845122  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.845633  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.845887  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.847386  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.848553  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.849410  662109 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:52:09.849690  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.850230  662109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:09.850303  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:52:09.850323  662109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:52:09.850346  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.851051  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.851404  662109 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:52:09.851426  662109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:52:09.851447  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.855303  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.855935  662109 addons.go:234] Setting addon default-storageclass=true in "no-preload-820741"
	W1209 11:52:09.855958  662109 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:52:09.855991  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.856373  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.856429  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.857583  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.857614  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.857874  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.858206  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.858588  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.858766  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:09.859464  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.859875  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.859897  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.860238  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.860449  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.860597  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.860736  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:09.880235  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I1209 11:52:09.880846  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.881409  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.881429  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.881855  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.882651  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.882711  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.904576  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I1209 11:52:09.905132  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.905765  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.905788  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.906224  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.906469  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.908475  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.908715  662109 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:52:09.908735  662109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:52:09.908756  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.912294  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.912928  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.912963  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.913128  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.913383  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.913563  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.913711  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:10.141200  662109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:10.172182  662109 node_ready.go:35] waiting up to 6m0s for node "no-preload-820741" to be "Ready" ...
	I1209 11:52:10.306617  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:52:10.306646  662109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:52:10.321962  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:52:10.326125  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:52:10.360534  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:52:10.360568  662109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:52:10.470875  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:52:10.470917  662109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:52:10.555610  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:52:11.721480  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.395310752s)
	I1209 11:52:11.721571  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721638  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.721581  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.165925756s)
	I1209 11:52:11.721735  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.399738143s)
	I1209 11:52:11.721753  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721766  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.721765  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721779  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722002  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722014  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722021  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722028  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722201  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722213  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722221  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722226  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722320  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.722329  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722349  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.722360  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722384  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722395  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722424  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722438  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722465  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722475  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722490  662109 addons.go:475] Verifying addon metrics-server=true in "no-preload-820741"
	I1209 11:52:11.722560  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722579  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722564  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.729638  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.729660  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.729934  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.729950  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.731642  662109 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1209 11:52:09.097654  662586 crio.go:462] duration metric: took 1.619191765s to copy over tarball
	I1209 11:52:09.097748  662586 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:12.304496  662586 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.20670295s)
	I1209 11:52:12.304543  662586 crio.go:469] duration metric: took 3.206852542s to extract the tarball
	I1209 11:52:12.304553  662586 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:12.347991  662586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:12.385411  662586 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:52:12.385438  662586 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 11:52:12.385533  662586 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:12.385557  662586 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.385570  662586 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.385609  662586 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.385641  662586 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 11:52:12.385650  662586 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.385645  662586 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.385620  662586 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.387326  662586 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.387335  662586 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:12.387371  662586 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 11:52:12.387372  662586 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.387328  662586 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.387338  662586 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.387328  662586 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.387383  662586 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.621631  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.623694  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.632536  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 11:52:12.634550  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.638401  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.641071  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.645344  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:09.504566  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:09.505124  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:09.505155  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:09.505076  663663 retry.go:31] will retry after 636.87343ms: waiting for machine to come up
	I1209 11:52:10.144387  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.145090  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.145119  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:10.145037  663663 retry.go:31] will retry after 716.448577ms: waiting for machine to come up
	I1209 11:52:10.863113  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.863816  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.863848  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:10.863762  663663 retry.go:31] will retry after 901.007245ms: waiting for machine to come up
	I1209 11:52:11.766356  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:11.766745  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:11.766773  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:11.766688  663663 retry.go:31] will retry after 1.570604193s: waiting for machine to come up
	I1209 11:52:13.339318  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:13.339796  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:13.339828  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:13.339744  663663 retry.go:31] will retry after 1.928200683s: waiting for machine to come up
	I1209 11:52:11.732956  662109 addons.go:510] duration metric: took 1.936137102s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1209 11:52:12.175844  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:14.504491  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:12.756066  662586 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 11:52:12.756121  662586 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.756134  662586 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 11:52:12.756175  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.756179  662586 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.756230  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.808091  662586 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 11:52:12.808139  662586 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 11:52:12.808186  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809593  662586 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 11:52:12.809622  662586 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 11:52:12.809637  662586 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.809659  662586 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.809682  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809712  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809775  662586 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 11:52:12.809803  662586 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.809829  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.809841  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809724  662586 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 11:52:12.809873  662586 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.809898  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809933  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.812256  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:12.819121  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.825106  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.910431  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.910501  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.910560  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.910503  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.910638  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:12.910713  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.930461  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:13.079147  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:13.079189  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:13.079233  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:13.079276  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:13.079418  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:13.079447  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:13.079517  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:13.224753  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 11:52:13.227126  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 11:52:13.227190  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:13.227253  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 11:52:13.227291  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:13.227332  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 11:52:13.227393  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 11:52:13.277747  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 11:52:13.285286  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 11:52:13.663858  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:13.805603  662586 cache_images.go:92] duration metric: took 1.420145666s to LoadCachedImages
	W1209 11:52:13.805814  662586 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1209 11:52:13.805848  662586 kubeadm.go:934] updating node { 192.168.61.132 8443 v1.20.0 crio true true} ...
	I1209 11:52:13.805980  662586 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-014592 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:13.806079  662586 ssh_runner.go:195] Run: crio config
	I1209 11:52:13.870766  662586 cni.go:84] Creating CNI manager for ""
	I1209 11:52:13.870797  662586 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:13.870813  662586 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:13.870841  662586 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.132 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-014592 NodeName:old-k8s-version-014592 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 11:52:13.871050  662586 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-014592"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:13.871136  662586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 11:52:13.881556  662586 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:13.881628  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:13.891122  662586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1209 11:52:13.908181  662586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:13.925041  662586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1209 11:52:13.941567  662586 ssh_runner.go:195] Run: grep 192.168.61.132	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:13.945502  662586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:13.957476  662586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:14.091699  662586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:14.108772  662586 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592 for IP: 192.168.61.132
	I1209 11:52:14.108810  662586 certs.go:194] generating shared ca certs ...
	I1209 11:52:14.108838  662586 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:14.109024  662586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:14.109087  662586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:14.109105  662586 certs.go:256] generating profile certs ...
	I1209 11:52:14.109248  662586 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.key
	I1209 11:52:14.109323  662586 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key.28078577
	I1209 11:52:14.109383  662586 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key
	I1209 11:52:14.109572  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:14.109609  662586 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:14.109619  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:14.109659  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:14.109697  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:14.109737  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:14.109802  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:14.110497  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:14.145815  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:14.179452  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:14.217469  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:14.250288  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 11:52:14.287110  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:52:14.317190  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:14.356825  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:14.379756  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:14.402045  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:14.425287  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:14.448025  662586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:14.464144  662586 ssh_runner.go:195] Run: openssl version
	I1209 11:52:14.470256  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:14.481298  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.485849  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.485904  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.492321  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:14.504155  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:14.515819  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.520876  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.520955  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.527295  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:14.538319  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:14.549753  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.554273  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.554341  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.559893  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
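	The three blocks above install each CA certificate into the system trust store using the OpenSSL hashed-symlink layout. A minimal sketch of that pattern (the hash for minikubeCA.pem, b5213941, matches the symlink created above):

	  pem=/usr/share/ca-certificates/minikubeCA.pem
	  hash=$(openssl x509 -hash -noout -in "$pem")   # subject-name hash, e.g. b5213941
	  sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"  # ".0" is the collision index OpenSSL expects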
	I1209 11:52:14.570744  662586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:14.575763  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:14.582279  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:14.588549  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:14.594376  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:14.599758  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:14.605497  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
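	The -checkend 86400 probes above ask OpenSSL whether each control-plane certificate expires within the next 24 hours. A sketch of the same check as a loop, assuming the standard kubeadm layout under /var/lib/minikube/certs:

	  for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	             etcd/healthcheck-client etcd/peer front-proxy-client; do
	    # a non-zero exit means the certificate expires within 86400 seconds
	    openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
	      || echo "expiring soon: ${crt}"
	  done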
	I1209 11:52:14.611083  662586 kubeadm.go:392] StartCluster: {Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:14.611213  662586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:14.611288  662586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:14.649447  662586 cri.go:89] found id: ""
	I1209 11:52:14.649538  662586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:14.660070  662586 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:14.660094  662586 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:14.660145  662586 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:14.670412  662586 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:14.671387  662586 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-014592" does not appear in /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:52:14.672043  662586 kubeconfig.go:62] /home/jenkins/minikube-integration/20068-609844/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-014592" cluster setting kubeconfig missing "old-k8s-version-014592" context setting]
	I1209 11:52:14.673337  662586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:14.708285  662586 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:14.719486  662586 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.132
	I1209 11:52:14.719535  662586 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:14.719563  662586 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:14.719635  662586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:14.755280  662586 cri.go:89] found id: ""
	I1209 11:52:14.755369  662586 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:14.771385  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:14.781364  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:14.781387  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:14.781455  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:14.790942  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:14.791016  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:14.800481  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:14.809875  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:14.809948  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:14.819619  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:14.831670  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:14.831750  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:14.844244  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:14.853328  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:14.853403  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
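	The grep/rm pairs above are the stale-config cleanup: a kubeconfig file is kept only if it already points at the expected control-plane endpoint. A compact sketch of that logic, using the same endpoint and file set as this log:

	  endpoint="https://control-plane.minikube.internal:8443"
	  for name in admin kubelet controller-manager scheduler; do
	    conf="/etc/kubernetes/${name}.conf"
	    # remove the file unless it references the expected endpoint
	    sudo grep -q "$endpoint" "$conf" 2>/dev/null || sudo rm -f "$conf"
	  done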
	I1209 11:52:14.862428  662586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:14.871346  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.007799  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.697594  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.921787  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:16.031826  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:16.132199  662586 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:16.132310  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:16.633329  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:17.133389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:17.632581  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:15.270255  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:15.270804  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:15.270836  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:15.270741  663663 retry.go:31] will retry after 2.90998032s: waiting for machine to come up
	I1209 11:52:18.182069  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:18.182741  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:18.182774  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:18.182689  663663 retry.go:31] will retry after 3.196470388s: waiting for machine to come up
	I1209 11:52:16.676188  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:17.175894  662109 node_ready.go:49] node "no-preload-820741" has status "Ready":"True"
	I1209 11:52:17.175928  662109 node_ready.go:38] duration metric: took 7.003696159s for node "no-preload-820741" to be "Ready" ...
	I1209 11:52:17.175945  662109 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:17.180647  662109 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:19.188583  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:18.133165  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:18.632403  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:19.132416  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:19.633332  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:20.133343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:20.632968  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:21.133411  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:21.632656  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:22.132876  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:22.632816  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
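	The repeated pgrep lines from process 662586 are the wait-for-apiserver loop: after the control-plane phases, minikube polls roughly twice a second until a kube-apiserver process for this profile shows up. A sketch of the equivalent shell loop, with the interval assumed from the log timestamps:

	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    sleep 0.5   # retry until the apiserver process appears (or an outer timeout fires)
	  done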
	I1209 11:52:21.381260  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:21.381912  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:21.381943  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:21.381834  663663 retry.go:31] will retry after 3.621023528s: waiting for machine to come up
	I1209 11:52:26.142813  661546 start.go:364] duration metric: took 56.424295065s to acquireMachinesLock for "embed-certs-005123"
	I1209 11:52:26.142877  661546 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:52:26.142886  661546 fix.go:54] fixHost starting: 
	I1209 11:52:26.143376  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:26.143416  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:26.164438  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45591
	I1209 11:52:26.165041  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:26.165779  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:52:26.165828  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:26.166318  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:26.166544  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:26.166745  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:52:26.168534  661546 fix.go:112] recreateIfNeeded on embed-certs-005123: state=Stopped err=<nil>
	I1209 11:52:26.168564  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	W1209 11:52:26.168753  661546 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:52:26.170973  661546 out.go:177] * Restarting existing kvm2 VM for "embed-certs-005123" ...
	I1209 11:52:26.172269  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Start
	I1209 11:52:26.172500  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring networks are active...
	I1209 11:52:26.173391  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring network default is active
	I1209 11:52:26.173747  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring network mk-embed-certs-005123 is active
	I1209 11:52:26.174208  661546 main.go:141] libmachine: (embed-certs-005123) Getting domain xml...
	I1209 11:52:26.174990  661546 main.go:141] libmachine: (embed-certs-005123) Creating domain...
	I1209 11:52:21.687274  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:23.688011  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:24.187886  662109 pod_ready.go:93] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.187917  662109 pod_ready.go:82] duration metric: took 7.007243363s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.187928  662109 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.193936  662109 pod_ready.go:93] pod "etcd-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.193958  662109 pod_ready.go:82] duration metric: took 6.02353ms for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.193966  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.203685  662109 pod_ready.go:93] pod "kube-apiserver-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.203712  662109 pod_ready.go:82] duration metric: took 9.739287ms for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.203722  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.210004  662109 pod_ready.go:93] pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.210034  662109 pod_ready.go:82] duration metric: took 6.304008ms for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.210048  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.216225  662109 pod_ready.go:93] pod "kube-proxy-hpvvp" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.216249  662109 pod_ready.go:82] duration metric: took 6.193945ms for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.216258  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.584682  662109 pod_ready.go:93] pod "kube-scheduler-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.584711  662109 pod_ready.go:82] duration metric: took 368.445803ms for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.584724  662109 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
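	The pod_ready lines above wait, through minikube's Kubernetes client, for the no-preload-820741 node and each system-critical pod to report Ready. An equivalent check expressed with kubectl (a sketch, not what the test binary runs; it assumes the kubeconfig for this profile is active):

	  kubectl wait --for=condition=Ready node/no-preload-820741 --timeout=6m
	  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	  kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m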
	I1209 11:52:25.004323  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.004761  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Found IP for machine: 192.168.50.25
	I1209 11:52:25.004791  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has current primary IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.004798  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Reserving static IP address...
	I1209 11:52:25.005275  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-482476", mac: "52:54:00:f0:c9:8a", ip: "192.168.50.25"} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.005301  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | skip adding static IP to network mk-default-k8s-diff-port-482476 - found existing host DHCP lease matching {name: "default-k8s-diff-port-482476", mac: "52:54:00:f0:c9:8a", ip: "192.168.50.25"}
	I1209 11:52:25.005314  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Reserved static IP address: 192.168.50.25
	I1209 11:52:25.005328  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for SSH to be available...
	I1209 11:52:25.005342  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Getting to WaitForSSH function...
	I1209 11:52:25.007758  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.008146  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.008189  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.008291  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Using SSH client type: external
	I1209 11:52:25.008318  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa (-rw-------)
	I1209 11:52:25.008348  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:25.008361  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | About to run SSH command:
	I1209 11:52:25.008369  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | exit 0
	I1209 11:52:25.130532  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:25.130901  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetConfigRaw
	I1209 11:52:25.131568  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:25.134487  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.134816  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.134854  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.135163  663024 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/config.json ...
	I1209 11:52:25.135451  663024 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:25.135480  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:25.135736  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.138444  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.138853  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.138894  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.138981  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.139188  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.139327  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.139491  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.139655  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.139895  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.139906  663024 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:25.242441  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:25.242472  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.242837  663024 buildroot.go:166] provisioning hostname "default-k8s-diff-port-482476"
	I1209 11:52:25.242878  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.243093  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.245995  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.246447  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.246478  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.246685  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.246900  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.247052  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.247175  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.247330  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.247518  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.247531  663024 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-482476 && echo "default-k8s-diff-port-482476" | sudo tee /etc/hostname
	I1209 11:52:25.361366  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-482476
	
	I1209 11:52:25.361397  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.364194  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.364608  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.364639  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.364813  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.365064  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.365267  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.365369  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.365613  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.365790  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.365808  663024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-482476' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-482476/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-482476' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:25.475311  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:25.475346  663024 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:25.475386  663024 buildroot.go:174] setting up certificates
	I1209 11:52:25.475403  663024 provision.go:84] configureAuth start
	I1209 11:52:25.475412  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.475711  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:25.478574  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.478903  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.478935  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.479055  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.481280  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.481655  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.481688  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.481788  663024 provision.go:143] copyHostCerts
	I1209 11:52:25.481845  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:25.481876  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:25.481957  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:25.482056  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:25.482065  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:25.482090  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:25.482243  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:25.482254  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:25.482279  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:25.482336  663024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-482476 san=[127.0.0.1 192.168.50.25 default-k8s-diff-port-482476 localhost minikube]
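	The provision step above generates the machine's server certificate in Go, signed by the minikube CA with the SAN list shown. A rough CLI equivalent with openssl (a sketch only; the 365-day validity is an assumption, while the org and SANs are taken from the log line):

	  openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	    -subj "/O=jenkins.default-k8s-diff-port-482476" -out server.csr
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -days 365 -out server.pem \
	    -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.50.25,DNS:default-k8s-diff-port-482476,DNS:localhost,DNS:minikube")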
	I1209 11:52:25.534856  663024 provision.go:177] copyRemoteCerts
	I1209 11:52:25.534921  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:25.534951  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.537732  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.538138  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.538190  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.538390  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.538611  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.538783  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.538943  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:25.619772  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:25.643527  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1209 11:52:25.668517  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:25.693573  663024 provision.go:87] duration metric: took 218.153182ms to configureAuth
	I1209 11:52:25.693615  663024 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:25.693807  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:25.693906  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.696683  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.697058  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.697092  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.697344  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.697548  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.697741  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.697868  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.698033  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.698229  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.698254  663024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:25.915568  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:25.915595  663024 machine.go:96] duration metric: took 780.126343ms to provisionDockerMachine
	I1209 11:52:25.915610  663024 start.go:293] postStartSetup for "default-k8s-diff-port-482476" (driver="kvm2")
	I1209 11:52:25.915620  663024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:25.915644  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:25.916005  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:25.916047  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.919268  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.919602  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.919628  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.919775  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.919967  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.920133  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.920285  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.000530  663024 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:26.004544  663024 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:26.004574  663024 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:26.004651  663024 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:26.004759  663024 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:26.004885  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:26.013444  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:26.036052  663024 start.go:296] duration metric: took 120.422739ms for postStartSetup
	I1209 11:52:26.036110  663024 fix.go:56] duration metric: took 20.120932786s for fixHost
	I1209 11:52:26.036135  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.039079  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.039445  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.039478  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.039797  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.040065  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.040228  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.040427  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.040620  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:26.040906  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:26.040924  663024 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:26.142590  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745146.090497627
	
	I1209 11:52:26.142623  663024 fix.go:216] guest clock: 1733745146.090497627
	I1209 11:52:26.142634  663024 fix.go:229] Guest: 2024-12-09 11:52:26.090497627 +0000 UTC Remote: 2024-12-09 11:52:26.036115182 +0000 UTC m=+146.587055001 (delta=54.382445ms)
	I1209 11:52:26.142669  663024 fix.go:200] guest clock delta is within tolerance: 54.382445ms
	I1209 11:52:26.142681  663024 start.go:83] releasing machines lock for "default-k8s-diff-port-482476", held for 20.227543026s
	I1209 11:52:26.142723  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.143032  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:26.146118  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.146602  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.146634  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.146841  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147440  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147709  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147833  663024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:26.147872  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.147980  663024 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:26.148009  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.151002  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151346  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151379  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.151410  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151534  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.151729  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.151848  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.151876  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151904  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.152003  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.152082  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.152159  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.152322  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.152565  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.231575  663024 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:26.267939  663024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:26.418953  663024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:26.426243  663024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:26.426337  663024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:26.448407  663024 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:26.448442  663024 start.go:495] detecting cgroup driver to use...
	I1209 11:52:26.448540  663024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:26.469675  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:26.488825  663024 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:26.488902  663024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:26.507716  663024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:26.525232  663024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:26.664062  663024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:26.854813  663024 docker.go:233] disabling docker service ...
	I1209 11:52:26.854883  663024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:26.870021  663024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:26.883610  663024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:27.001237  663024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:27.126865  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:27.144121  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:27.168073  663024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:52:27.168242  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.180516  663024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:27.180587  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.191681  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.204047  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.214157  663024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:27.225934  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.236691  663024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.258774  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.271986  663024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:27.283488  663024 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:27.283539  663024 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:27.299065  663024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:52:27.309203  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:27.431740  663024 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:27.529577  663024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:27.529668  663024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:27.534733  663024 start.go:563] Will wait 60s for crictl version
	I1209 11:52:27.534800  663024 ssh_runner.go:195] Run: which crictl
	I1209 11:52:27.538544  663024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:27.577577  663024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:27.577684  663024 ssh_runner.go:195] Run: crio --version
	I1209 11:52:27.607938  663024 ssh_runner.go:195] Run: crio --version
	I1209 11:52:27.645210  663024 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:52:23.133393  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:23.632776  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:24.133286  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:24.632415  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:25.132348  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:25.632478  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:26.132982  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:26.632517  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.132692  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.633291  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.646510  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:27.650014  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:27.650439  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:27.650469  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:27.650705  663024 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:27.654738  663024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:27.668671  663024 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:27.668808  663024 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:52:27.668873  663024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:27.709582  663024 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:52:27.709679  663024 ssh_runner.go:195] Run: which lz4
	I1209 11:52:27.713702  663024 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:27.717851  663024 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:27.717887  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 11:52:29.037160  663024 crio.go:462] duration metric: took 1.32348676s to copy over tarball
	I1209 11:52:29.037262  663024 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:27.500098  661546 main.go:141] libmachine: (embed-certs-005123) Waiting to get IP...
	I1209 11:52:27.501088  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.501538  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.501605  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.501510  663907 retry.go:31] will retry after 191.187925ms: waiting for machine to come up
	I1209 11:52:27.694017  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.694574  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.694605  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.694512  663907 retry.go:31] will retry after 256.268ms: waiting for machine to come up
	I1209 11:52:27.952185  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.952863  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.952908  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.952759  663907 retry.go:31] will retry after 460.272204ms: waiting for machine to come up
	I1209 11:52:28.414403  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:28.414925  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:28.414967  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:28.414873  663907 retry.go:31] will retry after 450.761189ms: waiting for machine to come up
	I1209 11:52:28.867687  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:28.868350  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:28.868389  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:28.868313  663907 retry.go:31] will retry after 615.800863ms: waiting for machine to come up
	I1209 11:52:29.486566  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:29.487179  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:29.487218  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:29.487108  663907 retry.go:31] will retry after 628.641045ms: waiting for machine to come up
	I1209 11:52:30.117051  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:30.117424  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:30.117459  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:30.117356  663907 retry.go:31] will retry after 902.465226ms: waiting for machine to come up
	I1209 11:52:31.021756  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:31.022268  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:31.022298  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:31.022229  663907 retry.go:31] will retry after 918.939368ms: waiting for machine to come up
	I1209 11:52:26.594953  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:29.093499  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:28.132379  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:28.633377  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:29.132983  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:29.633370  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:30.132748  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:30.633383  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.133450  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.633210  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:32.132406  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:32.632598  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.234956  663024 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.197609203s)
	I1209 11:52:31.235007  663024 crio.go:469] duration metric: took 2.197798334s to extract the tarball
	I1209 11:52:31.235018  663024 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:31.275616  663024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:31.320918  663024 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:52:31.320945  663024 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:52:31.320961  663024 kubeadm.go:934] updating node { 192.168.50.25 8444 v1.31.2 crio true true} ...
	I1209 11:52:31.321122  663024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-482476 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:31.321246  663024 ssh_runner.go:195] Run: crio config
	I1209 11:52:31.367805  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:52:31.367827  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:31.367839  663024 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:31.367863  663024 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.25 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-482476 NodeName:default-k8s-diff-port-482476 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:31.368005  663024 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.25
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-482476"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.25"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.25"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:31.368074  663024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:31.377831  663024 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:31.377902  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:31.386872  663024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1209 11:52:31.403764  663024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:31.419295  663024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1209 11:52:31.435856  663024 ssh_runner.go:195] Run: grep 192.168.50.25	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:31.439480  663024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:31.455136  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:31.573295  663024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:31.589679  663024 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476 for IP: 192.168.50.25
	I1209 11:52:31.589703  663024 certs.go:194] generating shared ca certs ...
	I1209 11:52:31.589741  663024 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:31.589930  663024 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:31.589982  663024 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:31.589995  663024 certs.go:256] generating profile certs ...
	I1209 11:52:31.590137  663024 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.key
	I1209 11:52:31.590256  663024 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.key.e2346b12
	I1209 11:52:31.590322  663024 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.key
	I1209 11:52:31.590479  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:31.590522  663024 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:31.590535  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:31.590571  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:31.590612  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:31.590649  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:31.590710  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:31.591643  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:31.634363  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:31.660090  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:31.692933  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:31.726010  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 11:52:31.757565  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 11:52:31.781368  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:31.805233  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:31.828391  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:31.850407  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:31.873159  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:31.895503  663024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:31.911754  663024 ssh_runner.go:195] Run: openssl version
	I1209 11:52:31.917771  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:31.929857  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.934518  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.934596  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.940382  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:31.951417  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:31.961966  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.966234  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.966286  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.972070  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:31.982547  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:31.993215  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:31.997579  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:31.997641  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:32.003050  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:32.013463  663024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:32.017936  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:32.024029  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:32.029686  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:32.035260  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:32.040696  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:32.046116  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 11:52:32.051521  663024 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:32.051605  663024 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:32.051676  663024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:32.092529  663024 cri.go:89] found id: ""
	I1209 11:52:32.092623  663024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:32.103153  663024 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:32.103183  663024 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:32.103247  663024 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:32.113029  663024 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:32.114506  663024 kubeconfig.go:125] found "default-k8s-diff-port-482476" server: "https://192.168.50.25:8444"
	I1209 11:52:32.116929  663024 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:32.127055  663024 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.25
	I1209 11:52:32.127108  663024 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:32.127124  663024 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:32.127189  663024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:32.169401  663024 cri.go:89] found id: ""
	I1209 11:52:32.169507  663024 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:32.187274  663024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:32.196843  663024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:32.196867  663024 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:32.196925  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 11:52:32.205670  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:32.205754  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:32.214977  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 11:52:32.223707  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:32.223782  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:32.232514  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 11:52:32.240999  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:32.241076  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:32.250049  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 11:52:32.258782  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:32.258846  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:32.268447  663024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:32.277875  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:32.394016  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.494978  663024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.100920844s)
	I1209 11:52:33.495030  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.719319  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.787272  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.882783  663024 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:33.882876  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.383090  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.942735  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:31.943207  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:31.943244  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:31.943141  663907 retry.go:31] will retry after 1.153139191s: waiting for machine to come up
	I1209 11:52:33.097672  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:33.098233  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:33.098299  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:33.098199  663907 retry.go:31] will retry after 2.002880852s: waiting for machine to come up
	I1209 11:52:35.103239  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:35.103693  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:35.103724  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:35.103639  663907 retry.go:31] will retry after 2.219510124s: waiting for machine to come up
	I1209 11:52:31.593184  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:34.090877  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:36.094569  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:33.132924  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:33.632884  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.132528  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.632989  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.133398  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.632376  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:36.132936  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:36.633152  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:37.133343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:37.633367  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.883172  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.384008  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.883940  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.901453  663024 api_server.go:72] duration metric: took 2.018670363s to wait for apiserver process to appear ...
	I1209 11:52:35.901489  663024 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:35.901524  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.225976  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:38.226017  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:38.226037  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.269459  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:38.269549  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:38.401652  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.407995  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:38.408028  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:38.902416  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.914550  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:38.914579  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:39.401719  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:39.409382  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:39.409427  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:39.902488  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:39.907511  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 200:
	ok
	I1209 11:52:39.914532  663024 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:39.914562  663024 api_server.go:131] duration metric: took 4.013066199s to wait for apiserver health ...
	I1209 11:52:39.914586  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:52:39.914594  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:39.915954  663024 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
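
	The block above shows api_server.go polling https://192.168.50.25:8444/healthz roughly every half second: the restarted apiserver keeps answering 500 while the [-]poststarthook/rbac/bootstrap-roles check is still failing, and the wait ends as soon as /healthz returns 200. The following is a minimal Go sketch of that style of readiness polling, not minikube's actual code; the URL, the 500 ms interval, and the use of InsecureSkipVerify (instead of loading the cluster CA) are assumptions made only to keep the example self-contained.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 OK or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The test apiserver presents a self-signed cert; a real client
				// would load /var/lib/minikube/certs/ca.crt instead of skipping
				// verification (assumption made to keep the sketch short).
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered 200: the control plane is up
				}
				// A 500 here typically means a poststarthook (e.g. rbac/bootstrap-roles)
				// has not finished yet, exactly as in the log above.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %v", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.25:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
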
	I1209 11:52:37.324833  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:37.325397  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:37.325430  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:37.325338  663907 retry.go:31] will retry after 3.636796307s: waiting for machine to come up
	I1209 11:52:40.966039  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:40.966438  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:40.966463  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:40.966419  663907 retry.go:31] will retry after 3.704289622s: waiting for machine to come up
	I1209 11:52:38.592804  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:40.593407  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:38.133368  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:38.632475  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.132993  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.633225  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:40.132552  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:40.633292  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:41.132443  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:41.632994  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:42.132631  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:42.633378  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.917397  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:39.928995  663024 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:52:39.953045  663024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:39.962582  663024 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:39.962628  663024 system_pods.go:61] "coredns-7c65d6cfc9-zzrgn" [dca7a835-3b66-4515-b571-6420afc42c44] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:39.962639  663024 system_pods.go:61] "etcd-default-k8s-diff-port-482476" [2323dbbc-9e7f-4047-b0be-b68b851f4986] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:39.962649  663024 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-482476" [0b7a4936-5282-46a4-a08a-e225b303f6f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:39.962658  663024 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-482476" [c6ff79a0-2177-4c79-8021-c523f8d53e9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:39.962666  663024 system_pods.go:61] "kube-proxy-6th5d" [0cff6df1-1adb-4b7e-8d59-a837db026339] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 11:52:39.962682  663024 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-482476" [524125eb-afd4-4e20-b0f0-e58019e84962] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:39.962694  663024 system_pods.go:61] "metrics-server-6867b74b74-bpccn" [7426c800-9ff7-4778-82a0-6c71fd05a222] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:39.962702  663024 system_pods.go:61] "storage-provisioner" [4478313a-58e8-4d24-ab0b-c087e664200d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:39.962711  663024 system_pods.go:74] duration metric: took 9.637672ms to wait for pod list to return data ...
	I1209 11:52:39.962725  663024 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:39.969576  663024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:39.969611  663024 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:39.969627  663024 node_conditions.go:105] duration metric: took 6.893708ms to run NodePressure ...
	I1209 11:52:39.969660  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:40.340239  663024 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:40.345384  663024 kubeadm.go:739] kubelet initialised
	I1209 11:52:40.345412  663024 kubeadm.go:740] duration metric: took 5.145751ms waiting for restarted kubelet to initialise ...
	I1209 11:52:40.345425  663024 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:40.350721  663024 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:42.357138  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:44.361981  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:44.674598  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.675048  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has current primary IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.675068  661546 main.go:141] libmachine: (embed-certs-005123) Found IP for machine: 192.168.72.218
	I1209 11:52:44.675075  661546 main.go:141] libmachine: (embed-certs-005123) Reserving static IP address...
	I1209 11:52:44.675492  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "embed-certs-005123", mac: "52:54:00:ee:a0:a8", ip: "192.168.72.218"} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.675522  661546 main.go:141] libmachine: (embed-certs-005123) DBG | skip adding static IP to network mk-embed-certs-005123 - found existing host DHCP lease matching {name: "embed-certs-005123", mac: "52:54:00:ee:a0:a8", ip: "192.168.72.218"}
	I1209 11:52:44.675537  661546 main.go:141] libmachine: (embed-certs-005123) Reserved static IP address: 192.168.72.218
	I1209 11:52:44.675555  661546 main.go:141] libmachine: (embed-certs-005123) Waiting for SSH to be available...
	I1209 11:52:44.675566  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Getting to WaitForSSH function...
	I1209 11:52:44.677490  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.677814  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.677860  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.677952  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Using SSH client type: external
	I1209 11:52:44.678012  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa (-rw-------)
	I1209 11:52:44.678042  661546 main.go:141] libmachine: (embed-certs-005123) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:44.678056  661546 main.go:141] libmachine: (embed-certs-005123) DBG | About to run SSH command:
	I1209 11:52:44.678068  661546 main.go:141] libmachine: (embed-certs-005123) DBG | exit 0
	I1209 11:52:44.798377  661546 main.go:141] libmachine: (embed-certs-005123) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:44.798782  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetConfigRaw
	I1209 11:52:44.799532  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:44.801853  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.802223  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.802255  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.802539  661546 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/config.json ...
	I1209 11:52:44.802777  661546 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:44.802799  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:44.802994  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:44.805481  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.805803  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.805838  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.805999  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:44.806219  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.806386  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.806555  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:44.806716  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:44.806886  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:44.806897  661546 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:44.914443  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:44.914480  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:44.914783  661546 buildroot.go:166] provisioning hostname "embed-certs-005123"
	I1209 11:52:44.914810  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:44.914973  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:44.918053  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.918471  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.918508  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.918701  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:44.918935  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.919087  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.919267  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:44.919452  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:44.919624  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:44.919645  661546 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-005123 && echo "embed-certs-005123" | sudo tee /etc/hostname
	I1209 11:52:45.032725  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-005123
	
	I1209 11:52:45.032760  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.035820  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.036222  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.036263  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.036466  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.036666  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.036864  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.037003  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.037189  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.037396  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.037413  661546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-005123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-005123/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-005123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:45.147189  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:45.147225  661546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:45.147284  661546 buildroot.go:174] setting up certificates
	I1209 11:52:45.147299  661546 provision.go:84] configureAuth start
	I1209 11:52:45.147313  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:45.147667  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:45.150526  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.150965  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.151009  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.151118  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.153778  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.154178  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.154213  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.154382  661546 provision.go:143] copyHostCerts
	I1209 11:52:45.154455  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:45.154478  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:45.154549  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:45.154673  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:45.154685  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:45.154717  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:45.154816  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:45.154829  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:45.154857  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:45.154935  661546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.embed-certs-005123 san=[127.0.0.1 192.168.72.218 embed-certs-005123 localhost minikube]
	I1209 11:52:45.382712  661546 provision.go:177] copyRemoteCerts
	I1209 11:52:45.382772  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:45.382801  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.385625  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.386020  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.386050  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.386241  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.386448  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.386626  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.386765  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:45.464427  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:45.488111  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1209 11:52:45.511231  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:45.534104  661546 provision.go:87] duration metric: took 386.787703ms to configureAuth
	I1209 11:52:45.534141  661546 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:45.534411  661546 config.go:182] Loaded profile config "embed-certs-005123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:45.534526  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.537936  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.538370  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.538402  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.538584  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.538826  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.539019  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.539150  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.539378  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.539551  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.539568  661546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:45.771215  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:45.771259  661546 machine.go:96] duration metric: took 968.466766ms to provisionDockerMachine
	I1209 11:52:45.771276  661546 start.go:293] postStartSetup for "embed-certs-005123" (driver="kvm2")
	I1209 11:52:45.771287  661546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:45.771316  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:45.771673  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:45.771709  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.774881  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.775294  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.775340  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.775510  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.775714  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.775899  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.776065  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:45.856991  661546 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:45.862195  661546 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:45.862224  661546 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:45.862295  661546 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:45.862368  661546 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:45.862497  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:45.874850  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:45.899279  661546 start.go:296] duration metric: took 127.984399ms for postStartSetup
	I1209 11:52:45.899332  661546 fix.go:56] duration metric: took 19.756446591s for fixHost
	I1209 11:52:45.899362  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.902428  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.902828  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.902861  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.903117  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.903344  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.903554  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.903704  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.903955  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.904191  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.904209  661546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:46.007164  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745165.964649155
	
	I1209 11:52:46.007194  661546 fix.go:216] guest clock: 1733745165.964649155
	I1209 11:52:46.007217  661546 fix.go:229] Guest: 2024-12-09 11:52:45.964649155 +0000 UTC Remote: 2024-12-09 11:52:45.899337716 +0000 UTC m=+369.711404421 (delta=65.311439ms)
	I1209 11:52:46.007267  661546 fix.go:200] guest clock delta is within tolerance: 65.311439ms
	I1209 11:52:46.007280  661546 start.go:83] releasing machines lock for "embed-certs-005123", held for 19.864428938s
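
	In the lines above, fix.go runs `date +%s.%N` inside the guest and compares the reported timestamp against the host clock, accepting the machine because the delta (65.311439ms here) is within tolerance. A small Go sketch of that comparison follows; the function name and the 2s tolerance are assumptions for illustration, not minikube's actual values.

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta converts the guest's `date +%s.%N` output into a time.Time and
	// returns the absolute difference from the supplied host time.
	func clockDelta(guestEpoch string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestEpoch, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		d := host.Sub(guest)
		if d < 0 {
			d = -d
		}
		return d, nil
	}

	func main() {
		// Timestamp taken from the log above; the 2s tolerance is an assumed example.
		d, err := clockDelta("1733745165.964649155", time.Now())
		if err != nil {
			panic(err)
		}
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", d, d < 2*time.Second)
	}
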
	I1209 11:52:46.007313  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.007616  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:46.011273  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.011799  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.011830  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.012074  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.012681  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.012907  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.013027  661546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:46.013099  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:46.013170  661546 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:46.013196  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:46.016473  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016764  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016840  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.016875  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016964  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:46.017186  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:46.017287  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.017401  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:46.017442  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.017480  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:46.017553  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:46.017785  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:46.017911  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:46.018075  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:46.129248  661546 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:46.136309  661546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:43.091899  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:45.592415  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:46.287879  661546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:46.293689  661546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:46.293770  661546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:46.311972  661546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:46.312009  661546 start.go:495] detecting cgroup driver to use...
	I1209 11:52:46.312085  661546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:46.329406  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:46.344607  661546 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:46.344664  661546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:46.360448  661546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:46.374509  661546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:46.503687  661546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:46.649152  661546 docker.go:233] disabling docker service ...
	I1209 11:52:46.649234  661546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:46.663277  661546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:46.677442  661546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:46.832667  661546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:46.949826  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:46.963119  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:46.981743  661546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:52:46.981834  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:46.991634  661546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:46.991706  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.004032  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.015001  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.025000  661546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:47.035513  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.045431  661546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.061931  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.071531  661546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:47.080492  661546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:47.080559  661546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:47.094021  661546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:52:47.104015  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:47.226538  661546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:47.318832  661546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:47.318911  661546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:47.323209  661546 start.go:563] Will wait 60s for crictl version
	I1209 11:52:47.323276  661546 ssh_runner.go:195] Run: which crictl
	I1209 11:52:47.326773  661546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:47.365536  661546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:47.365629  661546 ssh_runner.go:195] Run: crio --version
	I1209 11:52:47.392781  661546 ssh_runner.go:195] Run: crio --version
	I1209 11:52:47.422945  661546 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
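
	Before restarting CRI-O, the run above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed to pin the pause image to registry.k8s.io/pause:3.10 and to set cgroup_manager to "cgroupfs". As an illustration of what those two substitutions do, here is a minimal Go sketch that applies the same replacements to a local copy of the file; the helper name and the idea of editing the file in-process (rather than over SSH) are assumptions, not minikube's implementation.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// rewriteCrioConf pins the pause image and cgroup manager in a CRI-O drop-in
	// config, mirroring the two sed expressions shown in the log:
	//   s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|
	//   s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|
	func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
			"registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
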
	I1209 11:52:43.133189  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:43.632726  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:44.132804  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:44.632952  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:45.132474  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:45.633318  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.133116  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.632595  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:47.133211  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:47.633233  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.858128  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:49.358845  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:47.423936  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:47.426959  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:47.427401  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:47.427425  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:47.427636  661546 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:47.432509  661546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:47.448620  661546 kubeadm.go:883] updating cluster {Name:embed-certs-005123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:47.448772  661546 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:52:47.448824  661546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:47.485100  661546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:52:47.485173  661546 ssh_runner.go:195] Run: which lz4
	I1209 11:52:47.489202  661546 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:47.493060  661546 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:47.493093  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 11:52:48.772297  661546 crio.go:462] duration metric: took 1.283133931s to copy over tarball
	I1209 11:52:48.772381  661546 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:50.959318  661546 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.18690714s)
	I1209 11:52:50.959352  661546 crio.go:469] duration metric: took 2.187018432s to extract the tarball
	I1209 11:52:50.959359  661546 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:50.995746  661546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:51.037764  661546 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:52:51.037792  661546 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:52:51.037799  661546 kubeadm.go:934] updating node { 192.168.72.218 8443 v1.31.2 crio true true} ...
	I1209 11:52:51.037909  661546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-005123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:51.037972  661546 ssh_runner.go:195] Run: crio config
	I1209 11:52:51.080191  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:52:51.080220  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:51.080231  661546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:51.080258  661546 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.218 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-005123 NodeName:embed-certs-005123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:51.080442  661546 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-005123"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.218"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.218"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
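
	The kubeadm, kubelet, and kube-proxy configuration dumped above is rendered with the node's name, IP, and Kubernetes version filled in and is later copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp line a few entries below). Purely to illustrate that render-then-copy pattern, here is a tiny text/template sketch; the template fragment and struct fields are invented for this example and are not minikube's actual template.

	package main

	import (
		"os"
		"text/template"
	)

	type nodeParams struct {
		Name string
		IP   string
	}

	// fragment is an invented, abbreviated stand-in for the InitConfiguration
	// section shown in the log; only the rendering pattern is the point here.
	const fragment = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.IP}}
	  bindPort: 8443
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.Name}}"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "{{.IP}}"
	`

	func main() {
		tmpl := template.Must(template.New("kubeadm").Parse(fragment))
		// Values taken from the log above.
		_ = tmpl.Execute(os.Stdout, nodeParams{Name: "embed-certs-005123", IP: "192.168.72.218"})
	}
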
	I1209 11:52:51.080544  661546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:51.091894  661546 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:51.091975  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:51.101702  661546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1209 11:52:51.117636  661546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:51.133662  661546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1209 11:52:51.151725  661546 ssh_runner.go:195] Run: grep 192.168.72.218	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:51.155759  661546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:51.167480  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:47.592707  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:50.093177  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:48.132348  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:48.632894  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:49.133272  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:49.633015  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.132977  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.632533  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:51.132939  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:51.632463  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:52.133082  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:52.633298  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.357709  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.357740  663024 pod_ready.go:82] duration metric: took 10.006992001s for pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.357752  663024 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.363374  663024 pod_ready.go:93] pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.363403  663024 pod_ready.go:82] duration metric: took 5.642657ms for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.363417  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.368456  663024 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.368478  663024 pod_ready.go:82] duration metric: took 5.053713ms for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.368488  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.374156  663024 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.374205  663024 pod_ready.go:82] duration metric: took 5.708489ms for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.374219  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6th5d" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.378734  663024 pod_ready.go:93] pod "kube-proxy-6th5d" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.378752  663024 pod_ready.go:82] duration metric: took 4.526066ms for pod "kube-proxy-6th5d" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.378760  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:52.384763  663024 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:53.389110  663024 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:53.389146  663024 pod_ready.go:82] duration metric: took 3.010378852s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:53.389162  663024 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace to be "Ready" ...
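
Each pod_ready.go line above polls a single kube-system pod until its Ready condition is True, with a 4m0s cap. An illustrative client-go version of that check (standard client-go calls; the kubeconfig path, pod name, and 2s poll interval are assumptions made for the example):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"etcd-default-k8s-diff-port-482476", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
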
	I1209 11:52:51.305408  661546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:51.330738  661546 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123 for IP: 192.168.72.218
	I1209 11:52:51.330766  661546 certs.go:194] generating shared ca certs ...
	I1209 11:52:51.330791  661546 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:51.331002  661546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:51.331099  661546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:51.331116  661546 certs.go:256] generating profile certs ...
	I1209 11:52:51.331252  661546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/client.key
	I1209 11:52:51.331333  661546 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.key.a40d22b0
	I1209 11:52:51.331400  661546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.key
	I1209 11:52:51.331595  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:51.331631  661546 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:51.331645  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:51.331680  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:51.331717  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:51.331747  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:51.331824  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:51.332728  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:51.366002  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:51.400591  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:51.431219  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:51.459334  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 11:52:51.487240  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 11:52:51.522273  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:51.545757  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:51.572793  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:51.595719  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:51.618456  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:51.643337  661546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:51.659719  661546 ssh_runner.go:195] Run: openssl version
	I1209 11:52:51.665339  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:51.676145  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.680615  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.680670  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.686782  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:51.697398  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:51.707438  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.711764  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.711832  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.717278  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:51.727774  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:51.738575  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.742996  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.743057  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.748505  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:51.758738  661546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:51.763005  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:51.768964  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:51.775011  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:51.780810  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:51.786716  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:51.792351  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
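
The openssl x509 -checkend 86400 runs above confirm each control-plane certificate remains valid for at least another 24 hours before the existing cluster is reused. The same check can be expressed with the Go standard library; this sketch only mirrors the -checkend semantics and is not minikube's own helper:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at
// path expires within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Println(p, "error:", err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
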
	I1209 11:52:51.798098  661546 kubeadm.go:392] StartCluster: {Name:embed-certs-005123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:51.798239  661546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:51.798296  661546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:51.840669  661546 cri.go:89] found id: ""
	I1209 11:52:51.840755  661546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:51.850404  661546 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:51.850429  661546 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:51.850474  661546 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:51.859350  661546 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:51.860405  661546 kubeconfig.go:125] found "embed-certs-005123" server: "https://192.168.72.218:8443"
	I1209 11:52:51.862591  661546 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:51.872497  661546 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.218
	I1209 11:52:51.872539  661546 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:51.872558  661546 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:51.872638  661546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:51.913221  661546 cri.go:89] found id: ""
	I1209 11:52:51.913316  661546 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:51.929885  661546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:51.940078  661546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:51.940105  661546 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:51.940166  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:51.948911  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:51.948977  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:51.958278  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:51.966808  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:51.966879  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:51.975480  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:51.984071  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:51.984127  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:51.992480  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:52.000798  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:52.000873  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:52.009553  661546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:52.019274  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:52.133477  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.081976  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.293871  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.364259  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.452043  661546 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:53.452147  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:53.952743  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.452498  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.952482  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.452783  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.483411  661546 api_server.go:72] duration metric: took 2.0313706s to wait for apiserver process to appear ...
	I1209 11:52:55.483448  661546 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:55.483473  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:55.483982  661546 api_server.go:269] stopped: https://192.168.72.218:8443/healthz: Get "https://192.168.72.218:8443/healthz": dial tcp 192.168.72.218:8443: connect: connection refused
	I1209 11:52:55.983589  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:52.592309  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:55.257400  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:53.132520  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:53.633385  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.132432  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.632974  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.132958  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.633343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:56.132687  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:56.633236  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:57.133489  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:57.633105  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.396602  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:57.397077  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:58.136225  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:58.136259  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:58.136276  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.174521  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:58.174583  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:58.484089  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.489495  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:58.489536  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:58.984185  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.990889  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:58.990932  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:59.484415  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:59.490878  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 200:
	ok
	I1209 11:52:59.498196  661546 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:59.498231  661546 api_server.go:131] duration metric: took 4.014775842s to wait for apiserver health ...
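
The healthz probes above go from connection refused while the static pods start, to a 403 for the anonymous request, to 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally to 200. An illustrative poll loop against the same endpoint (TLS verification is skipped here purely to keep the sketch short; this is not minikube's own client setup):

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200
// or the context expires, retrying every 500ms as in the log above.
func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 and 500 show up while the post-start hooks finish; keep polling.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitForHealthz(ctx, "https://192.168.72.218:8443/healthz"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is healthy")
}
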
	I1209 11:52:59.498241  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:52:59.498247  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:59.499779  661546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:52:59.500941  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:59.514201  661546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:52:59.544391  661546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:59.555798  661546 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:59.555837  661546 system_pods.go:61] "coredns-7c65d6cfc9-cdnjm" [7cb724f8-c570-4a19-808d-da994ec43eaa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:59.555849  661546 system_pods.go:61] "etcd-embed-certs-005123" [bf194765-7520-4b5d-a1e5-b49830a0f620] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:59.555858  661546 system_pods.go:61] "kube-apiserver-embed-certs-005123" [470f6c19-0112-4b0d-89d9-b792e912cf6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:59.555863  661546 system_pods.go:61] "kube-controller-manager-embed-certs-005123" [b42748b2-f3a9-4d29-a832-a30d54b329c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:59.555868  661546 system_pods.go:61] "kube-proxy-b7bf2" [f9aab69c-2232-4f56-a502-ffd033f7ac10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 11:52:59.555877  661546 system_pods.go:61] "kube-scheduler-embed-certs-005123" [e61a8e3c-c1d3-4dab-abb2-6f5221bc5d25] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:59.555885  661546 system_pods.go:61] "metrics-server-6867b74b74-x4kvn" [210cb99c-e3e7-4337-bed4-985cb98143dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:59.555893  661546 system_pods.go:61] "storage-provisioner" [f2f7d9e2-1121-4df2-adb7-a0af32f957ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:59.555903  661546 system_pods.go:74] duration metric: took 11.485008ms to wait for pod list to return data ...
	I1209 11:52:59.555913  661546 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:59.560077  661546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:59.560100  661546 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:59.560110  661546 node_conditions.go:105] duration metric: took 4.192476ms to run NodePressure ...
	I1209 11:52:59.560132  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:59.890141  661546 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:59.895382  661546 kubeadm.go:739] kubelet initialised
	I1209 11:52:59.895414  661546 kubeadm.go:740] duration metric: took 5.227549ms waiting for restarted kubelet to initialise ...
	I1209 11:52:59.895425  661546 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:59.901454  661546 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:57.593336  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:00.094942  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:58.132858  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:58.633386  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.132544  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.633427  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:00.133402  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:00.632719  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:01.132786  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:01.632909  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:02.133197  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:02.632620  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.896691  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:02.396546  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:01.907730  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:03.910835  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:02.591692  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:05.090892  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:03.133091  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:03.633389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.132587  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.633239  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:05.132773  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:05.632456  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:06.132989  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:06.632584  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:07.133153  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:07.633389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.895599  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:06.896541  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.912963  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:06.408122  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.412579  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:10.419673  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:10.419702  661546 pod_ready.go:82] duration metric: took 10.518223469s for pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:10.419716  661546 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:07.591181  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:10.091248  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.132885  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:08.633192  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:09.132446  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:09.633385  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:10.132534  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:10.632399  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.132877  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.633091  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:12.132592  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:12.633185  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.396121  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.901605  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:12.425696  661546 pod_ready.go:103] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.926007  661546 pod_ready.go:93] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.926041  661546 pod_ready.go:82] duration metric: took 3.50631846s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.926053  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.931124  661546 pod_ready.go:93] pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.931150  661546 pod_ready.go:82] duration metric: took 5.090118ms for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.931163  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.935763  661546 pod_ready.go:93] pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.935783  661546 pod_ready.go:82] duration metric: took 4.613902ms for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.935792  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b7bf2" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.940013  661546 pod_ready.go:93] pod "kube-proxy-b7bf2" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.940037  661546 pod_ready.go:82] duration metric: took 4.238468ms for pod "kube-proxy-b7bf2" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.940050  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.944480  661546 pod_ready.go:93] pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.944497  661546 pod_ready.go:82] duration metric: took 4.439334ms for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.944504  661546 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:15.951194  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:12.091413  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:14.591239  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.132852  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:13.632863  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:14.132638  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:14.632522  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:15.133201  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:15.632442  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:16.132620  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:16.132747  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:16.171708  662586 cri.go:89] found id: ""
	I1209 11:53:16.171748  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.171761  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:16.171768  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:16.171823  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:16.206350  662586 cri.go:89] found id: ""
	I1209 11:53:16.206381  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.206390  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:16.206398  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:16.206468  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:16.239292  662586 cri.go:89] found id: ""
	I1209 11:53:16.239325  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.239334  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:16.239341  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:16.239397  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:16.275809  662586 cri.go:89] found id: ""
	I1209 11:53:16.275841  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.275850  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:16.275856  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:16.275913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:16.310434  662586 cri.go:89] found id: ""
	I1209 11:53:16.310466  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.310474  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:16.310480  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:16.310540  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:16.347697  662586 cri.go:89] found id: ""
	I1209 11:53:16.347729  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.347738  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:16.347745  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:16.347801  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:16.380949  662586 cri.go:89] found id: ""
	I1209 11:53:16.380977  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.380985  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:16.380992  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:16.381074  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:16.415236  662586 cri.go:89] found id: ""
	I1209 11:53:16.415268  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.415290  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:16.415304  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:16.415321  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:16.459614  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:16.459645  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:16.509575  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:16.509617  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:16.522864  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:16.522898  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:16.644997  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:16.645059  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:16.645106  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:16.396028  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:18.397195  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:17.951721  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.952199  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:16.591767  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.091470  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:21.095835  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.220978  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:19.233506  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:19.233597  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:19.268975  662586 cri.go:89] found id: ""
	I1209 11:53:19.269007  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.269019  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:19.269027  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:19.269103  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:19.304898  662586 cri.go:89] found id: ""
	I1209 11:53:19.304935  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.304949  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:19.304957  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:19.305034  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:19.344798  662586 cri.go:89] found id: ""
	I1209 11:53:19.344835  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.344846  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:19.344855  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:19.344925  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:19.395335  662586 cri.go:89] found id: ""
	I1209 11:53:19.395377  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.395387  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:19.395395  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:19.395464  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:19.430334  662586 cri.go:89] found id: ""
	I1209 11:53:19.430364  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.430377  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:19.430386  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:19.430465  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:19.468732  662586 cri.go:89] found id: ""
	I1209 11:53:19.468766  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.468775  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:19.468782  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:19.468836  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:19.503194  662586 cri.go:89] found id: ""
	I1209 11:53:19.503242  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.503255  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:19.503263  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:19.503328  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:19.537074  662586 cri.go:89] found id: ""
	I1209 11:53:19.537114  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.537125  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:19.537135  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:19.537151  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:19.590081  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:19.590130  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:19.604350  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:19.604388  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:19.683073  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:19.683106  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:19.683124  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:19.763564  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:19.763611  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:22.302792  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:22.315992  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:22.316079  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:22.350464  662586 cri.go:89] found id: ""
	I1209 11:53:22.350495  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.350505  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:22.350511  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:22.350569  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:22.382832  662586 cri.go:89] found id: ""
	I1209 11:53:22.382867  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.382880  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:22.382889  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:22.382958  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:22.417826  662586 cri.go:89] found id: ""
	I1209 11:53:22.417859  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.417871  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:22.417880  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:22.417963  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:22.451545  662586 cri.go:89] found id: ""
	I1209 11:53:22.451579  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.451588  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:22.451594  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:22.451659  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:22.488413  662586 cri.go:89] found id: ""
	I1209 11:53:22.488448  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.488458  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:22.488464  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:22.488531  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:22.523891  662586 cri.go:89] found id: ""
	I1209 11:53:22.523916  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.523925  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:22.523931  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:22.523990  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:22.555828  662586 cri.go:89] found id: ""
	I1209 11:53:22.555866  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.555879  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:22.555887  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:22.555960  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:22.592133  662586 cri.go:89] found id: ""
	I1209 11:53:22.592171  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.592181  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:22.592192  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:22.592209  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:22.641928  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:22.641966  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:22.655182  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:22.655215  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:53:20.896189  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:23.397242  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:21.957934  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:24.451292  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:23.591147  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:25.591982  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	W1209 11:53:22.724320  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:22.724343  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:22.724359  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:22.811692  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:22.811743  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:25.347903  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:25.360839  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:25.360907  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:25.392880  662586 cri.go:89] found id: ""
	I1209 11:53:25.392917  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.392930  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:25.392939  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:25.393008  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:25.427862  662586 cri.go:89] found id: ""
	I1209 11:53:25.427905  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.427914  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:25.427921  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:25.428009  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:25.463733  662586 cri.go:89] found id: ""
	I1209 11:53:25.463767  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.463778  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:25.463788  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:25.463884  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:25.501653  662586 cri.go:89] found id: ""
	I1209 11:53:25.501681  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.501690  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:25.501697  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:25.501751  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:25.535368  662586 cri.go:89] found id: ""
	I1209 11:53:25.535410  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.535422  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:25.535431  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:25.535511  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:25.569709  662586 cri.go:89] found id: ""
	I1209 11:53:25.569739  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.569748  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:25.569761  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:25.569827  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:25.604352  662586 cri.go:89] found id: ""
	I1209 11:53:25.604389  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.604404  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:25.604413  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:25.604477  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:25.635832  662586 cri.go:89] found id: ""
	I1209 11:53:25.635865  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.635878  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:25.635892  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:25.635908  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:25.650611  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:25.650647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:25.721092  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:25.721121  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:25.721139  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:25.795552  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:25.795598  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:25.858088  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:25.858161  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:25.898217  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.395882  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:26.950691  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.951203  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.091306  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:30.091842  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.410683  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:28.422993  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:28.423072  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:28.455054  662586 cri.go:89] found id: ""
	I1209 11:53:28.455083  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.455092  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:28.455098  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:28.455162  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:28.493000  662586 cri.go:89] found id: ""
	I1209 11:53:28.493037  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.493046  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:28.493052  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:28.493104  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:28.526294  662586 cri.go:89] found id: ""
	I1209 11:53:28.526333  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.526346  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:28.526354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:28.526417  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:28.560383  662586 cri.go:89] found id: ""
	I1209 11:53:28.560414  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.560423  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:28.560430  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:28.560485  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:28.595906  662586 cri.go:89] found id: ""
	I1209 11:53:28.595935  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.595946  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:28.595954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:28.596021  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:28.629548  662586 cri.go:89] found id: ""
	I1209 11:53:28.629584  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.629597  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:28.629607  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:28.629673  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:28.666362  662586 cri.go:89] found id: ""
	I1209 11:53:28.666398  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.666410  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:28.666418  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:28.666494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:28.697704  662586 cri.go:89] found id: ""
	I1209 11:53:28.697736  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.697746  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:28.697756  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:28.697769  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:28.745774  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:28.745816  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:28.759543  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:28.759582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:28.834772  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:28.834795  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:28.834812  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:28.913137  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:28.913178  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:31.460658  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:31.473503  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:31.473575  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:31.506710  662586 cri.go:89] found id: ""
	I1209 11:53:31.506748  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.506760  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:31.506770  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:31.506842  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:31.544127  662586 cri.go:89] found id: ""
	I1209 11:53:31.544188  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.544202  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:31.544211  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:31.544289  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:31.591081  662586 cri.go:89] found id: ""
	I1209 11:53:31.591116  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.591128  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:31.591135  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:31.591213  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:31.629311  662586 cri.go:89] found id: ""
	I1209 11:53:31.629340  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.629348  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:31.629355  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:31.629432  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:31.671035  662586 cri.go:89] found id: ""
	I1209 11:53:31.671069  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.671081  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:31.671090  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:31.671162  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:31.705753  662586 cri.go:89] found id: ""
	I1209 11:53:31.705792  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.705805  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:31.705815  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:31.705889  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:31.739118  662586 cri.go:89] found id: ""
	I1209 11:53:31.739146  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.739155  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:31.739162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:31.739225  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:31.771085  662586 cri.go:89] found id: ""
	I1209 11:53:31.771120  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.771129  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:31.771139  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:31.771152  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:31.820993  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:31.821049  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:31.835576  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:31.835612  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:31.903011  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:31.903039  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:31.903056  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:31.977784  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:31.977830  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:30.896197  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:33.395937  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:31.450832  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:33.451161  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:35.451446  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:32.590724  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:34.592352  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:34.514654  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:34.529156  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:34.529236  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:34.567552  662586 cri.go:89] found id: ""
	I1209 11:53:34.567580  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.567590  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:34.567598  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:34.567665  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:34.608863  662586 cri.go:89] found id: ""
	I1209 11:53:34.608891  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.608900  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:34.608907  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:34.608970  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:34.647204  662586 cri.go:89] found id: ""
	I1209 11:53:34.647242  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.647254  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:34.647263  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:34.647333  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:34.682511  662586 cri.go:89] found id: ""
	I1209 11:53:34.682565  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.682580  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:34.682596  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:34.682674  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:34.717557  662586 cri.go:89] found id: ""
	I1209 11:53:34.717585  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.717595  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:34.717602  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:34.717670  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:34.749814  662586 cri.go:89] found id: ""
	I1209 11:53:34.749851  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.749865  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:34.749876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:34.749949  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:34.782732  662586 cri.go:89] found id: ""
	I1209 11:53:34.782766  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.782776  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:34.782782  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:34.782846  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:34.817114  662586 cri.go:89] found id: ""
	I1209 11:53:34.817149  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.817162  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:34.817175  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:34.817192  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:34.885963  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:34.885986  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:34.886001  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:34.969858  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:34.969905  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:35.006981  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:35.007024  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:35.055360  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:35.055401  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:37.570641  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:37.595904  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:37.595986  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:37.642205  662586 cri.go:89] found id: ""
	I1209 11:53:37.642248  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.642261  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:37.642270  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:37.642347  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:37.676666  662586 cri.go:89] found id: ""
	I1209 11:53:37.676692  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.676701  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:37.676707  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:37.676760  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:35.396037  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.896489  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.952569  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:40.450464  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.092250  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:39.092392  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.714201  662586 cri.go:89] found id: ""
	I1209 11:53:37.714233  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.714243  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:37.714249  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:37.714311  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:37.748018  662586 cri.go:89] found id: ""
	I1209 11:53:37.748047  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.748058  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:37.748067  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:37.748127  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:37.783763  662586 cri.go:89] found id: ""
	I1209 11:53:37.783799  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.783807  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:37.783823  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:37.783898  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:37.822470  662586 cri.go:89] found id: ""
	I1209 11:53:37.822502  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.822514  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:37.822523  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:37.822585  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:37.858493  662586 cri.go:89] found id: ""
	I1209 11:53:37.858527  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.858537  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:37.858543  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:37.858599  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:37.899263  662586 cri.go:89] found id: ""
	I1209 11:53:37.899288  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.899295  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:37.899304  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:37.899321  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:37.972531  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:37.972559  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:37.972575  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:38.046271  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:38.046315  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:38.088829  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:38.088867  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:38.141935  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:38.141985  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:40.657131  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:40.669884  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:40.669954  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:40.704291  662586 cri.go:89] found id: ""
	I1209 11:53:40.704332  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.704345  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:40.704357  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:40.704435  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:40.738637  662586 cri.go:89] found id: ""
	I1209 11:53:40.738673  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.738684  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:40.738690  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:40.738747  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:40.770737  662586 cri.go:89] found id: ""
	I1209 11:53:40.770774  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.770787  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:40.770796  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:40.770865  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:40.805667  662586 cri.go:89] found id: ""
	I1209 11:53:40.805702  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.805729  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:40.805739  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:40.805812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:40.838444  662586 cri.go:89] found id: ""
	I1209 11:53:40.838482  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.838496  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:40.838505  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:40.838578  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:40.871644  662586 cri.go:89] found id: ""
	I1209 11:53:40.871679  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.871691  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:40.871700  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:40.871763  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:40.907242  662586 cri.go:89] found id: ""
	I1209 11:53:40.907275  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.907284  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:40.907291  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:40.907359  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:40.941542  662586 cri.go:89] found id: ""
	I1209 11:53:40.941570  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.941583  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:40.941595  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:40.941616  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:41.022344  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:41.022373  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:41.022387  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:41.097083  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:41.097129  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:41.135303  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:41.135349  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:41.191400  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:41.191447  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:40.396681  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:42.895118  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:42.451217  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:44.951893  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:41.591753  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:44.090762  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:46.091821  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:43.705246  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:43.717939  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:43.718001  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:43.750027  662586 cri.go:89] found id: ""
	I1209 11:53:43.750066  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.750079  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:43.750087  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:43.750156  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:43.782028  662586 cri.go:89] found id: ""
	I1209 11:53:43.782067  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.782081  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:43.782090  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:43.782153  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:43.815509  662586 cri.go:89] found id: ""
	I1209 11:53:43.815549  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.815562  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:43.815570  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:43.815629  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:43.852803  662586 cri.go:89] found id: ""
	I1209 11:53:43.852834  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.852842  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:43.852850  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:43.852915  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:43.886761  662586 cri.go:89] found id: ""
	I1209 11:53:43.886789  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.886798  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:43.886805  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:43.886883  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:43.924427  662586 cri.go:89] found id: ""
	I1209 11:53:43.924458  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.924466  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:43.924478  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:43.924542  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:43.960351  662586 cri.go:89] found id: ""
	I1209 11:53:43.960381  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.960398  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:43.960407  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:43.960476  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:43.993933  662586 cri.go:89] found id: ""
	I1209 11:53:43.993960  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.993969  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:43.993979  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:43.994002  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:44.006915  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:44.006952  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:44.078928  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:44.078981  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:44.078999  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:44.158129  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:44.158188  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:44.199543  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:44.199577  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:46.748607  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:46.762381  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:46.762494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:46.795674  662586 cri.go:89] found id: ""
	I1209 11:53:46.795713  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.795727  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:46.795737  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:46.795812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:46.834027  662586 cri.go:89] found id: ""
	I1209 11:53:46.834055  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.834065  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:46.834072  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:46.834124  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:46.872116  662586 cri.go:89] found id: ""
	I1209 11:53:46.872156  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.872169  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:46.872179  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:46.872264  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:46.906571  662586 cri.go:89] found id: ""
	I1209 11:53:46.906599  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.906608  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:46.906615  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:46.906676  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:46.938266  662586 cri.go:89] found id: ""
	I1209 11:53:46.938303  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.938315  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:46.938323  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:46.938381  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:46.972281  662586 cri.go:89] found id: ""
	I1209 11:53:46.972318  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.972329  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:46.972337  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:46.972391  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:47.004797  662586 cri.go:89] found id: ""
	I1209 11:53:47.004828  662586 logs.go:282] 0 containers: []
	W1209 11:53:47.004837  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:47.004843  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:47.004908  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:47.035877  662586 cri.go:89] found id: ""
	I1209 11:53:47.035905  662586 logs.go:282] 0 containers: []
	W1209 11:53:47.035917  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:47.035931  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:47.035947  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:47.087654  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:47.087706  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:47.102311  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:47.102346  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:47.195370  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:47.195396  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:47.195414  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:47.279103  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:47.279158  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:44.895382  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:46.895838  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:48.896133  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:47.453879  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:49.951686  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:48.591393  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:51.090806  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:49.817942  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:49.830291  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:49.830357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:49.862917  662586 cri.go:89] found id: ""
	I1209 11:53:49.862950  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.862959  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:49.862965  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:49.863033  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:49.894971  662586 cri.go:89] found id: ""
	I1209 11:53:49.895005  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.895018  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:49.895027  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:49.895097  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:49.931737  662586 cri.go:89] found id: ""
	I1209 11:53:49.931775  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.931786  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:49.931800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:49.931862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:49.971064  662586 cri.go:89] found id: ""
	I1209 11:53:49.971097  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.971109  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:49.971118  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:49.971210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:50.005354  662586 cri.go:89] found id: ""
	I1209 11:53:50.005393  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.005417  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:50.005427  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:50.005501  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:50.044209  662586 cri.go:89] found id: ""
	I1209 11:53:50.044240  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.044249  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:50.044257  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:50.044313  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:50.076360  662586 cri.go:89] found id: ""
	I1209 11:53:50.076408  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.076418  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:50.076426  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:50.076494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:50.112125  662586 cri.go:89] found id: ""
	I1209 11:53:50.112168  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.112196  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:50.112210  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:50.112228  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:50.164486  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:50.164530  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:50.178489  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:50.178525  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:50.250131  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:50.250165  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:50.250196  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:50.329733  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:50.329779  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:50.896354  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:53.395149  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:52.450595  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:54.450939  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:53.092311  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:55.590766  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:52.874887  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:52.888518  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:52.888607  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:52.924361  662586 cri.go:89] found id: ""
	I1209 11:53:52.924389  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.924398  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:52.924404  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:52.924467  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:52.957769  662586 cri.go:89] found id: ""
	I1209 11:53:52.957803  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.957816  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:52.957824  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:52.957891  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:52.990339  662586 cri.go:89] found id: ""
	I1209 11:53:52.990376  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.990388  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:52.990397  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:52.990461  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:53.022959  662586 cri.go:89] found id: ""
	I1209 11:53:53.023003  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.023017  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:53.023028  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:53.023111  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:53.060271  662586 cri.go:89] found id: ""
	I1209 11:53:53.060299  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.060315  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:53.060321  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:53.060390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:53.093470  662586 cri.go:89] found id: ""
	I1209 11:53:53.093500  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.093511  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:53.093519  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:53.093575  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:53.128902  662586 cri.go:89] found id: ""
	I1209 11:53:53.128941  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.128955  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:53.128963  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:53.129036  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:53.161927  662586 cri.go:89] found id: ""
	I1209 11:53:53.161955  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.161964  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:53.161974  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:53.161988  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:53.214098  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:53.214140  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:53.229191  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:53.229232  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:53.308648  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:53.308678  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:53.308695  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:53.386776  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:53.386816  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:55.929307  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:55.942217  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:55.942285  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:55.983522  662586 cri.go:89] found id: ""
	I1209 11:53:55.983563  662586 logs.go:282] 0 containers: []
	W1209 11:53:55.983572  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:55.983579  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:55.983645  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:56.017262  662586 cri.go:89] found id: ""
	I1209 11:53:56.017293  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.017308  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:56.017314  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:56.017367  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:56.052385  662586 cri.go:89] found id: ""
	I1209 11:53:56.052419  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.052429  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:56.052436  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:56.052489  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:56.085385  662586 cri.go:89] found id: ""
	I1209 11:53:56.085432  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.085444  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:56.085452  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:56.085519  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:56.122754  662586 cri.go:89] found id: ""
	I1209 11:53:56.122785  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.122794  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:56.122800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:56.122862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:56.159033  662586 cri.go:89] found id: ""
	I1209 11:53:56.159061  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.159070  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:56.159077  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:56.159128  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:56.198022  662586 cri.go:89] found id: ""
	I1209 11:53:56.198058  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.198070  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:56.198078  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:56.198148  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:56.231475  662586 cri.go:89] found id: ""
	I1209 11:53:56.231515  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.231528  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:56.231542  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:56.231559  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:56.304922  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:56.304969  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:56.339875  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:56.339916  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:56.392893  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:56.392929  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:56.406334  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:56.406376  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:56.474037  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:55.895077  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:57.895835  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:56.452163  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:58.950981  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:57.590943  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:00.091057  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:58.974725  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:58.987817  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:58.987890  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:59.020951  662586 cri.go:89] found id: ""
	I1209 11:53:59.020987  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.020996  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:59.021003  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:59.021055  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:59.055675  662586 cri.go:89] found id: ""
	I1209 11:53:59.055715  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.055727  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:59.055733  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:59.055800  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:59.090099  662586 cri.go:89] found id: ""
	I1209 11:53:59.090138  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.090150  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:59.090158  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:59.090252  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:59.124680  662586 cri.go:89] found id: ""
	I1209 11:53:59.124718  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.124730  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:59.124739  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:59.124802  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:59.157772  662586 cri.go:89] found id: ""
	I1209 11:53:59.157808  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.157819  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:59.157828  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:59.157892  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:59.191098  662586 cri.go:89] found id: ""
	I1209 11:53:59.191132  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.191141  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:59.191148  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:59.191212  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:59.224050  662586 cri.go:89] found id: ""
	I1209 11:53:59.224090  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.224102  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:59.224110  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:59.224198  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:59.262361  662586 cri.go:89] found id: ""
	I1209 11:53:59.262397  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.262418  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:59.262432  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:59.262456  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:59.276811  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:59.276844  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:59.349465  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:59.349492  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:59.349506  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:59.429146  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:59.429192  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:59.470246  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:59.470287  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:02.021651  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:02.036039  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:02.036109  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:02.070999  662586 cri.go:89] found id: ""
	I1209 11:54:02.071034  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.071045  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:02.071052  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:02.071119  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:02.107506  662586 cri.go:89] found id: ""
	I1209 11:54:02.107536  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.107546  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:02.107554  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:02.107624  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:02.146279  662586 cri.go:89] found id: ""
	I1209 11:54:02.146314  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.146326  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:02.146342  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:02.146408  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:02.178349  662586 cri.go:89] found id: ""
	I1209 11:54:02.178378  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.178387  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:02.178402  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:02.178460  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:02.211916  662586 cri.go:89] found id: ""
	I1209 11:54:02.211952  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.211963  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:02.211969  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:02.212038  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:02.246334  662586 cri.go:89] found id: ""
	I1209 11:54:02.246370  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.246380  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:02.246387  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:02.246452  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:02.280111  662586 cri.go:89] found id: ""
	I1209 11:54:02.280157  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.280168  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:02.280175  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:02.280246  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:02.314141  662586 cri.go:89] found id: ""
	I1209 11:54:02.314188  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.314203  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:02.314216  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:02.314236  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:02.327220  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:02.327253  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:02.396099  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:02.396127  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:02.396142  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:02.478096  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:02.478148  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:02.515144  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:02.515175  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:59.896109  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:02.396485  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:04.396925  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:01.450279  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:03.450732  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:05.451265  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:02.092010  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:04.092427  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:05.069286  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:05.082453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:05.082540  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:05.116263  662586 cri.go:89] found id: ""
	I1209 11:54:05.116299  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.116313  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:05.116321  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:05.116388  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:05.150736  662586 cri.go:89] found id: ""
	I1209 11:54:05.150775  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.150788  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:05.150796  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:05.150864  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:05.183757  662586 cri.go:89] found id: ""
	I1209 11:54:05.183792  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.183804  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:05.183812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:05.183873  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:05.215986  662586 cri.go:89] found id: ""
	I1209 11:54:05.216017  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.216029  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:05.216038  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:05.216096  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:05.247648  662586 cri.go:89] found id: ""
	I1209 11:54:05.247686  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.247698  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:05.247707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:05.247776  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:05.279455  662586 cri.go:89] found id: ""
	I1209 11:54:05.279484  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.279495  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:05.279504  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:05.279567  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:05.320334  662586 cri.go:89] found id: ""
	I1209 11:54:05.320374  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.320387  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:05.320398  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:05.320490  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:05.353475  662586 cri.go:89] found id: ""
	I1209 11:54:05.353503  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.353512  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:05.353522  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:05.353536  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:05.368320  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:05.368357  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:05.442152  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:05.442193  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:05.442212  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:05.523726  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:05.523768  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:05.562405  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:05.562438  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:06.895764  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.897032  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:07.454237  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:09.456440  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:06.591474  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.591578  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:11.091599  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.115564  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:08.129426  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:08.129523  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:08.162412  662586 cri.go:89] found id: ""
	I1209 11:54:08.162454  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.162467  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:08.162477  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:08.162543  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:08.196821  662586 cri.go:89] found id: ""
	I1209 11:54:08.196860  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.196873  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:08.196882  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:08.196949  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:08.233068  662586 cri.go:89] found id: ""
	I1209 11:54:08.233106  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.233117  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:08.233124  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:08.233184  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:08.268683  662586 cri.go:89] found id: ""
	I1209 11:54:08.268715  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.268724  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:08.268731  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:08.268790  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:08.303237  662586 cri.go:89] found id: ""
	I1209 11:54:08.303276  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.303288  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:08.303309  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:08.303393  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:08.339513  662586 cri.go:89] found id: ""
	I1209 11:54:08.339543  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.339551  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:08.339557  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:08.339612  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:08.376237  662586 cri.go:89] found id: ""
	I1209 11:54:08.376268  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.376289  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:08.376298  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:08.376363  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:08.410530  662586 cri.go:89] found id: ""
	I1209 11:54:08.410560  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.410568  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:08.410577  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:08.410589  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:08.460064  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:08.460101  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:08.474547  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:08.474582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:08.544231  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:08.544260  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:08.544277  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:08.624727  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:08.624775  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:11.167943  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:11.183210  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:11.183294  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:11.221326  662586 cri.go:89] found id: ""
	I1209 11:54:11.221356  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.221365  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:11.221371  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:11.221434  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:11.254688  662586 cri.go:89] found id: ""
	I1209 11:54:11.254721  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.254730  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:11.254736  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:11.254801  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:11.287611  662586 cri.go:89] found id: ""
	I1209 11:54:11.287649  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.287660  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:11.287666  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:11.287738  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:11.320533  662586 cri.go:89] found id: ""
	I1209 11:54:11.320565  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.320574  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:11.320580  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:11.320638  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:11.362890  662586 cri.go:89] found id: ""
	I1209 11:54:11.362923  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.362933  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:11.362939  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:11.363007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:11.418729  662586 cri.go:89] found id: ""
	I1209 11:54:11.418762  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.418772  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:11.418779  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:11.418837  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:11.455336  662586 cri.go:89] found id: ""
	I1209 11:54:11.455374  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.455388  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:11.455397  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:11.455479  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:11.491307  662586 cri.go:89] found id: ""
	I1209 11:54:11.491334  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.491344  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:11.491355  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:11.491369  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:11.543161  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:11.543204  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:11.556633  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:11.556670  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:11.626971  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:11.627001  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:11.627025  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:11.702061  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:11.702107  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:11.396167  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:13.897097  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:11.952029  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:14.451701  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:13.590749  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:15.591845  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:14.245241  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:14.258461  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:14.258544  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:14.292108  662586 cri.go:89] found id: ""
	I1209 11:54:14.292147  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.292156  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:14.292163  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:14.292219  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:14.327347  662586 cri.go:89] found id: ""
	I1209 11:54:14.327381  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.327394  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:14.327403  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:14.327484  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:14.361188  662586 cri.go:89] found id: ""
	I1209 11:54:14.361220  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.361229  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:14.361236  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:14.361290  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:14.394898  662586 cri.go:89] found id: ""
	I1209 11:54:14.394936  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.394948  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:14.394960  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:14.395027  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:14.429326  662586 cri.go:89] found id: ""
	I1209 11:54:14.429402  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.429420  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:14.429431  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:14.429510  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:14.462903  662586 cri.go:89] found id: ""
	I1209 11:54:14.462938  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.462947  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:14.462954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:14.463009  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:14.496362  662586 cri.go:89] found id: ""
	I1209 11:54:14.496396  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.496409  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:14.496418  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:14.496562  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:14.530052  662586 cri.go:89] found id: ""
	I1209 11:54:14.530085  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.530098  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:14.530111  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:14.530131  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:14.543096  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:14.543133  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:14.611030  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:14.611055  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:14.611067  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:14.684984  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:14.685041  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:14.722842  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:14.722881  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:17.275868  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:17.288812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:17.288898  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:17.323732  662586 cri.go:89] found id: ""
	I1209 11:54:17.323766  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.323777  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:17.323786  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:17.323852  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:17.367753  662586 cri.go:89] found id: ""
	I1209 11:54:17.367788  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.367801  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:17.367810  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:17.367878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:17.411444  662586 cri.go:89] found id: ""
	I1209 11:54:17.411476  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.411488  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:17.411496  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:17.411563  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:17.450790  662586 cri.go:89] found id: ""
	I1209 11:54:17.450821  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.450832  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:17.450840  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:17.450913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:17.488824  662586 cri.go:89] found id: ""
	I1209 11:54:17.488859  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.488869  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:17.488876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:17.488948  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:17.522051  662586 cri.go:89] found id: ""
	I1209 11:54:17.522085  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.522094  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:17.522102  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:17.522165  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:17.556653  662586 cri.go:89] found id: ""
	I1209 11:54:17.556687  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.556700  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:17.556707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:17.556783  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:17.591303  662586 cri.go:89] found id: ""
	I1209 11:54:17.591337  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.591355  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:17.591367  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:17.591384  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:17.656675  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:17.656699  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:17.656712  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:16.396574  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:18.896050  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:16.950508  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:19.451294  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:18.091307  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:20.091489  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:17.739894  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:17.739939  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:17.789486  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:17.789517  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:17.843606  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:17.843648  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:20.361896  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:20.378015  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:20.378105  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:20.412252  662586 cri.go:89] found id: ""
	I1209 11:54:20.412299  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.412311  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:20.412327  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:20.412396  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:20.443638  662586 cri.go:89] found id: ""
	I1209 11:54:20.443671  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.443682  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:20.443690  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:20.443758  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:20.478578  662586 cri.go:89] found id: ""
	I1209 11:54:20.478613  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.478625  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:20.478634  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:20.478704  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:20.512232  662586 cri.go:89] found id: ""
	I1209 11:54:20.512266  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.512279  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:20.512295  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:20.512357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:20.544358  662586 cri.go:89] found id: ""
	I1209 11:54:20.544398  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.544413  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:20.544429  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:20.544494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:20.579476  662586 cri.go:89] found id: ""
	I1209 11:54:20.579513  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.579525  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:20.579533  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:20.579600  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:20.613851  662586 cri.go:89] found id: ""
	I1209 11:54:20.613884  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.613897  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:20.613903  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:20.613973  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:20.647311  662586 cri.go:89] found id: ""
	I1209 11:54:20.647342  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.647351  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:20.647362  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:20.647375  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:20.695798  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:20.695839  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:20.709443  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:20.709478  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:20.779211  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:20.779237  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:20.779253  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:20.857966  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:20.858012  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:20.896168  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:22.896667  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:21.455716  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:23.950823  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:25.952038  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:22.592225  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:25.091934  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:23.398095  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:23.412622  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:23.412686  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:23.446582  662586 cri.go:89] found id: ""
	I1209 11:54:23.446616  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.446628  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:23.446637  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:23.446705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:23.487896  662586 cri.go:89] found id: ""
	I1209 11:54:23.487926  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.487935  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:23.487941  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:23.488007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:23.521520  662586 cri.go:89] found id: ""
	I1209 11:54:23.521559  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.521571  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:23.521579  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:23.521651  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:23.561296  662586 cri.go:89] found id: ""
	I1209 11:54:23.561329  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.561342  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:23.561350  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:23.561417  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:23.604936  662586 cri.go:89] found id: ""
	I1209 11:54:23.604965  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.604976  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:23.604985  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:23.605055  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:23.665193  662586 cri.go:89] found id: ""
	I1209 11:54:23.665225  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.665237  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:23.665247  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:23.665315  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:23.700202  662586 cri.go:89] found id: ""
	I1209 11:54:23.700239  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.700251  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:23.700259  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:23.700336  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:23.734877  662586 cri.go:89] found id: ""
	I1209 11:54:23.734907  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.734917  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:23.734927  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:23.734941  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:23.817328  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:23.817371  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:23.855052  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:23.855085  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:23.909107  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:23.909154  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:23.924198  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:23.924227  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:23.991976  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:26.492366  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:26.506223  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:26.506299  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:26.544932  662586 cri.go:89] found id: ""
	I1209 11:54:26.544974  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.544987  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:26.544997  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:26.545080  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:26.579581  662586 cri.go:89] found id: ""
	I1209 11:54:26.579621  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.579634  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:26.579643  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:26.579716  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:26.612510  662586 cri.go:89] found id: ""
	I1209 11:54:26.612545  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.612567  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:26.612577  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:26.612646  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:26.646273  662586 cri.go:89] found id: ""
	I1209 11:54:26.646306  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.646316  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:26.646322  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:26.646376  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:26.682027  662586 cri.go:89] found id: ""
	I1209 11:54:26.682063  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.682072  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:26.682078  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:26.682132  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:26.715822  662586 cri.go:89] found id: ""
	I1209 11:54:26.715876  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.715889  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:26.715898  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:26.715964  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:26.755976  662586 cri.go:89] found id: ""
	I1209 11:54:26.756016  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.756031  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:26.756040  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:26.756122  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:26.787258  662586 cri.go:89] found id: ""
	I1209 11:54:26.787297  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.787308  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:26.787319  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:26.787333  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:26.800534  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:26.800573  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:26.865767  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:26.865798  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:26.865824  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:26.950409  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:26.950460  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:26.994281  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:26.994320  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:25.396411  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:27.894846  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:28.451141  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:30.455101  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:27.591769  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:30.091528  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:29.544568  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:29.565182  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:29.565263  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:29.625116  662586 cri.go:89] found id: ""
	I1209 11:54:29.625155  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.625168  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:29.625181  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:29.625257  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:29.673689  662586 cri.go:89] found id: ""
	I1209 11:54:29.673727  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.673739  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:29.673747  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:29.673811  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:29.705925  662586 cri.go:89] found id: ""
	I1209 11:54:29.705959  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.705971  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:29.705979  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:29.706033  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:29.738731  662586 cri.go:89] found id: ""
	I1209 11:54:29.738759  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.738767  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:29.738774  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:29.738832  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:29.770778  662586 cri.go:89] found id: ""
	I1209 11:54:29.770814  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.770826  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:29.770833  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:29.770899  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:29.801925  662586 cri.go:89] found id: ""
	I1209 11:54:29.801961  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.801973  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:29.801981  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:29.802050  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:29.833681  662586 cri.go:89] found id: ""
	I1209 11:54:29.833712  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.833722  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:29.833727  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:29.833791  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:29.873666  662586 cri.go:89] found id: ""
	I1209 11:54:29.873700  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.873712  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:29.873722  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:29.873735  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:29.914855  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:29.914895  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:29.967730  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:29.967772  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:29.982037  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:29.982070  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:30.047168  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:30.047195  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:30.047212  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:32.623371  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:32.636346  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:32.636411  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:32.677709  662586 cri.go:89] found id: ""
	I1209 11:54:32.677736  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.677744  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:32.677753  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:32.677805  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:29.896176  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.395216  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.952287  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:35.451456  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.092615  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:34.591397  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.710906  662586 cri.go:89] found id: ""
	I1209 11:54:32.710933  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.710942  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:32.710948  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:32.711000  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:32.744623  662586 cri.go:89] found id: ""
	I1209 11:54:32.744654  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.744667  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:32.744676  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:32.744736  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:32.779334  662586 cri.go:89] found id: ""
	I1209 11:54:32.779364  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.779375  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:32.779382  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:32.779443  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:32.814998  662586 cri.go:89] found id: ""
	I1209 11:54:32.815032  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.815046  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:32.815055  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:32.815128  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:32.850054  662586 cri.go:89] found id: ""
	I1209 11:54:32.850099  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.850116  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:32.850127  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:32.850213  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:32.885769  662586 cri.go:89] found id: ""
	I1209 11:54:32.885805  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.885818  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:32.885827  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:32.885899  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:32.927973  662586 cri.go:89] found id: ""
	I1209 11:54:32.928001  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.928010  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:32.928019  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:32.928032  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:32.981915  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:32.981966  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:32.995817  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:32.995851  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:33.062409  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:33.062445  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:33.062462  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:33.146967  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:33.147011  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:35.688225  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:35.701226  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:35.701325  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:35.738628  662586 cri.go:89] found id: ""
	I1209 11:54:35.738655  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.738663  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:35.738670  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:35.738737  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:35.771125  662586 cri.go:89] found id: ""
	I1209 11:54:35.771163  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.771177  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:35.771187  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:35.771260  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:35.806244  662586 cri.go:89] found id: ""
	I1209 11:54:35.806277  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.806290  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:35.806301  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:35.806376  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:35.839871  662586 cri.go:89] found id: ""
	I1209 11:54:35.839912  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.839925  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:35.839932  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:35.840010  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:35.874994  662586 cri.go:89] found id: ""
	I1209 11:54:35.875034  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.875046  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:35.875054  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:35.875129  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:35.910802  662586 cri.go:89] found id: ""
	I1209 11:54:35.910834  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.910846  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:35.910855  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:35.910927  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:35.944633  662586 cri.go:89] found id: ""
	I1209 11:54:35.944663  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.944672  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:35.944678  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:35.944749  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:35.982732  662586 cri.go:89] found id: ""
	I1209 11:54:35.982781  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.982796  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:35.982811  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:35.982830  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:35.996271  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:35.996302  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:36.063463  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:36.063533  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:36.063554  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:36.141789  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:36.141833  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:36.187015  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:36.187047  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:34.895890  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.396472  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.951404  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:40.452814  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.091548  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:39.092168  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:38.739585  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:38.754322  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:38.754394  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:38.792497  662586 cri.go:89] found id: ""
	I1209 11:54:38.792525  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.792535  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:38.792543  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:38.792608  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:38.829730  662586 cri.go:89] found id: ""
	I1209 11:54:38.829759  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.829768  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:38.829774  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:38.829834  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:38.869942  662586 cri.go:89] found id: ""
	I1209 11:54:38.869981  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.869994  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:38.870015  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:38.870085  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:38.906001  662586 cri.go:89] found id: ""
	I1209 11:54:38.906041  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.906054  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:38.906063  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:38.906133  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:38.944389  662586 cri.go:89] found id: ""
	I1209 11:54:38.944427  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.944445  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:38.944453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:38.944534  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:38.979633  662586 cri.go:89] found id: ""
	I1209 11:54:38.979665  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.979674  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:38.979681  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:38.979735  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:39.016366  662586 cri.go:89] found id: ""
	I1209 11:54:39.016402  662586 logs.go:282] 0 containers: []
	W1209 11:54:39.016416  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:39.016424  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:39.016489  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:39.049084  662586 cri.go:89] found id: ""
	I1209 11:54:39.049116  662586 logs.go:282] 0 containers: []
	W1209 11:54:39.049125  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:39.049134  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:39.049148  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:39.113953  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:39.113985  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:39.114004  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:39.191715  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:39.191767  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:39.232127  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:39.232167  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:39.281406  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:39.281448  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:41.795395  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:41.810293  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:41.810364  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:41.849819  662586 cri.go:89] found id: ""
	I1209 11:54:41.849858  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.849872  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:41.849882  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:41.849952  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:41.883871  662586 cri.go:89] found id: ""
	I1209 11:54:41.883908  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.883934  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:41.883942  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:41.884017  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:41.918194  662586 cri.go:89] found id: ""
	I1209 11:54:41.918230  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.918239  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:41.918245  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:41.918312  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:41.950878  662586 cri.go:89] found id: ""
	I1209 11:54:41.950912  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.950924  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:41.950933  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:41.950995  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:41.982922  662586 cri.go:89] found id: ""
	I1209 11:54:41.982964  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.982976  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:41.982985  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:41.983064  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:42.014066  662586 cri.go:89] found id: ""
	I1209 11:54:42.014107  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.014120  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:42.014129  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:42.014229  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:42.048017  662586 cri.go:89] found id: ""
	I1209 11:54:42.048056  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.048070  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:42.048079  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:42.048146  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:42.080585  662586 cri.go:89] found id: ""
	I1209 11:54:42.080614  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.080624  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:42.080634  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:42.080646  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:42.135012  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:42.135054  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:42.148424  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:42.148462  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:42.219179  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:42.219206  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:42.219230  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:42.305855  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:42.305902  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:39.895830  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:41.896255  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:44.398373  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:42.949835  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:44.951542  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:41.590831  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:43.592053  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:45.593044  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:44.843158  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:44.856317  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:44.856380  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:44.890940  662586 cri.go:89] found id: ""
	I1209 11:54:44.890984  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.891003  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:44.891012  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:44.891081  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:44.923657  662586 cri.go:89] found id: ""
	I1209 11:54:44.923684  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.923692  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:44.923698  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:44.923769  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:44.957512  662586 cri.go:89] found id: ""
	I1209 11:54:44.957545  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.957558  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:44.957566  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:44.957636  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:44.998084  662586 cri.go:89] found id: ""
	I1209 11:54:44.998112  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.998121  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:44.998128  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:44.998210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:45.030335  662586 cri.go:89] found id: ""
	I1209 11:54:45.030360  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.030369  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:45.030375  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:45.030447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:45.063098  662586 cri.go:89] found id: ""
	I1209 11:54:45.063127  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.063135  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:45.063141  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:45.063210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:45.098430  662586 cri.go:89] found id: ""
	I1209 11:54:45.098458  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.098466  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:45.098472  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:45.098526  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:45.132064  662586 cri.go:89] found id: ""
	I1209 11:54:45.132094  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.132102  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:45.132113  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:45.132131  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:45.185512  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:45.185556  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:45.199543  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:45.199572  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:45.268777  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:45.268803  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:45.268817  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:45.352250  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:45.352299  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:46.897153  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:49.395935  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:46.952862  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:49.450006  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:48.092394  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:50.591937  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:47.892201  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:47.906961  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:47.907053  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:47.941349  662586 cri.go:89] found id: ""
	I1209 11:54:47.941394  662586 logs.go:282] 0 containers: []
	W1209 11:54:47.941408  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:47.941418  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:47.941479  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:47.981086  662586 cri.go:89] found id: ""
	I1209 11:54:47.981120  662586 logs.go:282] 0 containers: []
	W1209 11:54:47.981133  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:47.981141  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:47.981210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:48.014105  662586 cri.go:89] found id: ""
	I1209 11:54:48.014142  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.014151  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:48.014162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:48.014249  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:48.049506  662586 cri.go:89] found id: ""
	I1209 11:54:48.049535  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.049544  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:48.049552  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:48.049619  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:48.084284  662586 cri.go:89] found id: ""
	I1209 11:54:48.084314  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.084324  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:48.084336  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:48.084406  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:48.117318  662586 cri.go:89] found id: ""
	I1209 11:54:48.117349  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.117362  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:48.117371  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:48.117441  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:48.150121  662586 cri.go:89] found id: ""
	I1209 11:54:48.150151  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.150187  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:48.150198  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:48.150266  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:48.180919  662586 cri.go:89] found id: ""
	I1209 11:54:48.180947  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.180955  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:48.180966  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:48.180978  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:48.249572  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:48.249602  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:48.249617  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:48.324508  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:48.324552  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:48.363856  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:48.363901  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:48.415662  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:48.415721  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:50.929811  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:50.943650  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:50.943714  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:50.976444  662586 cri.go:89] found id: ""
	I1209 11:54:50.976480  662586 logs.go:282] 0 containers: []
	W1209 11:54:50.976493  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:50.976502  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:50.976574  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:51.016567  662586 cri.go:89] found id: ""
	I1209 11:54:51.016600  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.016613  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:51.016621  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:51.016699  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:51.048933  662586 cri.go:89] found id: ""
	I1209 11:54:51.048967  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.048977  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:51.048986  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:51.049073  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:51.083292  662586 cri.go:89] found id: ""
	I1209 11:54:51.083333  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.083345  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:51.083354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:51.083423  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:51.118505  662586 cri.go:89] found id: ""
	I1209 11:54:51.118547  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.118560  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:51.118571  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:51.118644  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:51.152818  662586 cri.go:89] found id: ""
	I1209 11:54:51.152847  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.152856  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:51.152870  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:51.152922  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:51.186953  662586 cri.go:89] found id: ""
	I1209 11:54:51.186981  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.186991  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:51.186997  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:51.187063  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:51.219305  662586 cri.go:89] found id: ""
	I1209 11:54:51.219337  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.219348  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:51.219361  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:51.219380  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:51.256295  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:51.256338  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:51.313751  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:51.313806  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:51.326940  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:51.326977  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:51.397395  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:51.397428  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:51.397445  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:51.396434  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.896554  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:51.456719  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.951566  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:52.592043  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:55.091800  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.975557  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:53.989509  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:53.989581  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:54.024363  662586 cri.go:89] found id: ""
	I1209 11:54:54.024403  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.024416  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:54.024423  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:54.024484  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:54.062618  662586 cri.go:89] found id: ""
	I1209 11:54:54.062649  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.062659  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:54.062667  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:54.062739  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:54.100194  662586 cri.go:89] found id: ""
	I1209 11:54:54.100231  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.100243  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:54.100252  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:54.100324  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:54.135302  662586 cri.go:89] found id: ""
	I1209 11:54:54.135341  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.135354  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:54.135363  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:54.135447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:54.170898  662586 cri.go:89] found id: ""
	I1209 11:54:54.170940  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.170953  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:54.170963  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:54.171035  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:54.205098  662586 cri.go:89] found id: ""
	I1209 11:54:54.205138  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.205151  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:54.205159  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:54.205223  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:54.239153  662586 cri.go:89] found id: ""
	I1209 11:54:54.239210  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.239226  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:54.239234  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:54.239307  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:54.278213  662586 cri.go:89] found id: ""
	I1209 11:54:54.278248  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.278260  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:54.278275  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:54.278296  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:54.348095  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:54.348128  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:54.348156  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:54.427181  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:54.427224  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:54.467623  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:54.467656  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:54.519690  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:54.519734  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:57.033524  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:57.046420  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:57.046518  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:57.079588  662586 cri.go:89] found id: ""
	I1209 11:54:57.079616  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.079626  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:57.079633  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:57.079687  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:57.114944  662586 cri.go:89] found id: ""
	I1209 11:54:57.114973  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.114982  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:57.114988  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:57.115043  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:57.147667  662586 cri.go:89] found id: ""
	I1209 11:54:57.147708  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.147721  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:57.147730  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:57.147794  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:57.182339  662586 cri.go:89] found id: ""
	I1209 11:54:57.182370  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.182386  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:57.182395  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:57.182470  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:57.223129  662586 cri.go:89] found id: ""
	I1209 11:54:57.223170  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.223186  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:57.223197  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:57.223270  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:57.262351  662586 cri.go:89] found id: ""
	I1209 11:54:57.262386  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.262398  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:57.262409  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:57.262471  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:57.298743  662586 cri.go:89] found id: ""
	I1209 11:54:57.298772  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.298782  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:57.298789  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:57.298856  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:57.339030  662586 cri.go:89] found id: ""
	I1209 11:54:57.339064  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.339073  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:57.339085  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:57.339122  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:57.352603  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:57.352637  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:57.426627  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:57.426653  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:57.426669  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:57.515357  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:57.515401  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:57.554882  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:57.554925  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:56.396610  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:58.895822  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:56.451429  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:58.951440  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:57.590864  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:00.091967  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:00.112082  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:00.124977  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:00.125056  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:00.159003  662586 cri.go:89] found id: ""
	I1209 11:55:00.159032  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.159041  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:00.159048  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:00.159101  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:00.192479  662586 cri.go:89] found id: ""
	I1209 11:55:00.192515  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.192527  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:00.192533  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:00.192587  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:00.226146  662586 cri.go:89] found id: ""
	I1209 11:55:00.226194  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.226208  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:00.226216  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:00.226273  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:00.260389  662586 cri.go:89] found id: ""
	I1209 11:55:00.260420  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.260430  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:00.260442  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:00.260500  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:00.296091  662586 cri.go:89] found id: ""
	I1209 11:55:00.296121  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.296131  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:00.296138  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:00.296195  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:00.332101  662586 cri.go:89] found id: ""
	I1209 11:55:00.332137  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.332150  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:00.332158  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:00.332244  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:00.377329  662586 cri.go:89] found id: ""
	I1209 11:55:00.377358  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.377368  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:00.377374  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:00.377438  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:00.415660  662586 cri.go:89] found id: ""
	I1209 11:55:00.415688  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.415751  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:00.415767  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:00.415781  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:00.467734  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:00.467776  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:00.481244  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:00.481280  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:00.545721  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:00.545755  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:00.545777  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:00.624482  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:00.624533  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:01.396452  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.895539  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:01.452337  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.950752  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:05.951246  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:02.092654  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:04.592173  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.168340  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:03.183354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:03.183439  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:03.223131  662586 cri.go:89] found id: ""
	I1209 11:55:03.223171  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.223185  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:03.223193  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:03.223263  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:03.256561  662586 cri.go:89] found id: ""
	I1209 11:55:03.256595  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.256603  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:03.256609  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:03.256667  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:03.289670  662586 cri.go:89] found id: ""
	I1209 11:55:03.289707  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.289722  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:03.289738  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:03.289813  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:03.323687  662586 cri.go:89] found id: ""
	I1209 11:55:03.323714  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.323724  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:03.323730  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:03.323786  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:03.358163  662586 cri.go:89] found id: ""
	I1209 11:55:03.358221  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.358233  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:03.358241  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:03.358311  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:03.399688  662586 cri.go:89] found id: ""
	I1209 11:55:03.399721  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.399734  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:03.399744  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:03.399812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:03.433909  662586 cri.go:89] found id: ""
	I1209 11:55:03.433939  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.433948  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:03.433954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:03.434011  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:03.470208  662586 cri.go:89] found id: ""
	I1209 11:55:03.470239  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.470248  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:03.470270  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:03.470289  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:03.545801  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:03.545848  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:03.584357  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:03.584389  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:03.641241  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:03.641283  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:03.657034  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:03.657080  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:03.731285  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:06.232380  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:06.246339  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:06.246411  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:06.281323  662586 cri.go:89] found id: ""
	I1209 11:55:06.281362  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.281377  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:06.281385  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:06.281444  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:06.318225  662586 cri.go:89] found id: ""
	I1209 11:55:06.318261  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.318277  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:06.318293  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:06.318364  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:06.353649  662586 cri.go:89] found id: ""
	I1209 11:55:06.353685  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.353699  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:06.353708  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:06.353782  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:06.395204  662586 cri.go:89] found id: ""
	I1209 11:55:06.395242  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.395257  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:06.395266  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:06.395335  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:06.436421  662586 cri.go:89] found id: ""
	I1209 11:55:06.436452  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.436462  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:06.436469  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:06.436524  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:06.472218  662586 cri.go:89] found id: ""
	I1209 11:55:06.472246  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.472255  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:06.472268  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:06.472335  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:06.506585  662586 cri.go:89] found id: ""
	I1209 11:55:06.506629  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.506640  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:06.506647  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:06.506702  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:06.541442  662586 cri.go:89] found id: ""
	I1209 11:55:06.541472  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.541481  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:06.541493  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:06.541512  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:06.592642  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:06.592682  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:06.606764  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:06.606805  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:06.677693  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:06.677720  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:06.677740  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:06.766074  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:06.766124  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:05.896263  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:08.396283  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:07.951409  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:10.451540  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:06.592724  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:09.091961  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:09.305144  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:09.319352  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:09.319444  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:09.357918  662586 cri.go:89] found id: ""
	I1209 11:55:09.358027  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.358066  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:09.358077  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:09.358139  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:09.413181  662586 cri.go:89] found id: ""
	I1209 11:55:09.413213  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.413226  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:09.413234  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:09.413310  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:09.448417  662586 cri.go:89] found id: ""
	I1209 11:55:09.448460  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.448471  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:09.448480  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:09.448566  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:09.489732  662586 cri.go:89] found id: ""
	I1209 11:55:09.489765  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.489775  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:09.489781  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:09.489845  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:09.524919  662586 cri.go:89] found id: ""
	I1209 11:55:09.524948  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.524959  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:09.524968  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:09.525051  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:09.563268  662586 cri.go:89] found id: ""
	I1209 11:55:09.563301  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.563311  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:09.563318  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:09.563373  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:09.598747  662586 cri.go:89] found id: ""
	I1209 11:55:09.598780  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.598790  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:09.598798  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:09.598866  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:09.634447  662586 cri.go:89] found id: ""
	I1209 11:55:09.634479  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.634492  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:09.634505  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:09.634520  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:09.647380  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:09.647419  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:09.721335  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:09.721363  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:09.721380  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:09.801039  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:09.801088  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:09.840929  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:09.840971  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:12.393810  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:12.407553  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:12.407654  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:12.444391  662586 cri.go:89] found id: ""
	I1209 11:55:12.444437  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.444450  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:12.444459  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:12.444533  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:12.482714  662586 cri.go:89] found id: ""
	I1209 11:55:12.482752  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.482764  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:12.482771  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:12.482853  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:12.518139  662586 cri.go:89] found id: ""
	I1209 11:55:12.518187  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.518202  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:12.518211  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:12.518281  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:12.556903  662586 cri.go:89] found id: ""
	I1209 11:55:12.556938  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.556950  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:12.556958  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:12.557028  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:12.591915  662586 cri.go:89] found id: ""
	I1209 11:55:12.591953  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.591963  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:12.591971  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:12.592038  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:12.629767  662586 cri.go:89] found id: ""
	I1209 11:55:12.629797  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.629806  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:12.629812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:12.629878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:12.667677  662586 cri.go:89] found id: ""
	I1209 11:55:12.667710  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.667720  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:12.667727  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:12.667781  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:10.896109  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.896992  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.451770  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:14.952359  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:11.591952  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:14.092213  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.705720  662586 cri.go:89] found id: ""
	I1209 11:55:12.705747  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.705756  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:12.705766  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:12.705780  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:12.758399  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:12.758441  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:12.772297  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:12.772336  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:12.839545  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:12.839569  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:12.839582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:12.918424  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:12.918467  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:15.458122  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:15.473193  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:15.473284  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:15.508756  662586 cri.go:89] found id: ""
	I1209 11:55:15.508790  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.508799  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:15.508806  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:15.508862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:15.544735  662586 cri.go:89] found id: ""
	I1209 11:55:15.544770  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.544782  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:15.544791  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:15.544866  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:15.577169  662586 cri.go:89] found id: ""
	I1209 11:55:15.577200  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.577210  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:15.577216  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:15.577277  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:15.610662  662586 cri.go:89] found id: ""
	I1209 11:55:15.610690  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.610700  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:15.610707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:15.610763  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:15.645339  662586 cri.go:89] found id: ""
	I1209 11:55:15.645375  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.645386  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:15.645394  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:15.645469  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:15.682044  662586 cri.go:89] found id: ""
	I1209 11:55:15.682079  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.682096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:15.682106  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:15.682201  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:15.717193  662586 cri.go:89] found id: ""
	I1209 11:55:15.717228  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.717245  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:15.717256  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:15.717332  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:15.751756  662586 cri.go:89] found id: ""
	I1209 11:55:15.751792  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.751803  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:15.751813  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:15.751827  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:15.811010  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:15.811063  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:15.842556  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:15.842597  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:15.920169  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:15.920195  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:15.920209  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:16.003180  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:16.003226  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:15.395666  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:17.396041  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:19.396262  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:17.451272  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:19.951638  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:16.591423  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:18.592456  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:21.090108  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:18.542563  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:18.555968  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:18.556059  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:18.588746  662586 cri.go:89] found id: ""
	I1209 11:55:18.588780  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.588790  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:18.588797  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:18.588854  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:18.623664  662586 cri.go:89] found id: ""
	I1209 11:55:18.623707  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.623720  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:18.623728  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:18.623798  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:18.659012  662586 cri.go:89] found id: ""
	I1209 11:55:18.659051  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.659065  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:18.659074  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:18.659148  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:18.693555  662586 cri.go:89] found id: ""
	I1209 11:55:18.693588  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.693600  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:18.693607  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:18.693661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:18.726609  662586 cri.go:89] found id: ""
	I1209 11:55:18.726641  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.726652  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:18.726659  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:18.726712  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:18.760654  662586 cri.go:89] found id: ""
	I1209 11:55:18.760682  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.760694  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:18.760704  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:18.760761  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:18.794656  662586 cri.go:89] found id: ""
	I1209 11:55:18.794688  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.794699  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:18.794706  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:18.794769  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:18.829988  662586 cri.go:89] found id: ""
	I1209 11:55:18.830030  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.830045  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:18.830059  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:18.830073  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:18.872523  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:18.872558  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:18.929408  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:18.929449  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:18.943095  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:18.943133  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:19.009125  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:19.009150  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:19.009164  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:21.587418  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:21.606271  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:21.606358  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:21.653536  662586 cri.go:89] found id: ""
	I1209 11:55:21.653574  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.653586  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:21.653595  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:21.653671  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:21.687023  662586 cri.go:89] found id: ""
	I1209 11:55:21.687049  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.687060  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:21.687068  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:21.687131  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:21.720112  662586 cri.go:89] found id: ""
	I1209 11:55:21.720150  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.720163  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:21.720171  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:21.720243  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:21.754697  662586 cri.go:89] found id: ""
	I1209 11:55:21.754729  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.754740  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:21.754749  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:21.754814  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:21.793926  662586 cri.go:89] found id: ""
	I1209 11:55:21.793957  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.793967  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:21.793973  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:21.794040  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:21.827572  662586 cri.go:89] found id: ""
	I1209 11:55:21.827609  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.827622  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:21.827633  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:21.827700  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:21.861442  662586 cri.go:89] found id: ""
	I1209 11:55:21.861472  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.861490  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:21.861499  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:21.861565  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:21.894858  662586 cri.go:89] found id: ""
	I1209 11:55:21.894884  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.894892  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:21.894901  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:21.894914  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:21.942567  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:21.942625  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:21.956849  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:21.956879  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:22.020700  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:22.020724  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:22.020738  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:22.095730  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:22.095767  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:21.896304  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.395936  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:21.951928  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.450997  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:23.090962  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:25.091816  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.631715  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:24.644165  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:24.644252  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:24.677720  662586 cri.go:89] found id: ""
	I1209 11:55:24.677757  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.677769  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:24.677778  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:24.677835  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:24.711053  662586 cri.go:89] found id: ""
	I1209 11:55:24.711086  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.711095  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:24.711101  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:24.711154  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:24.744107  662586 cri.go:89] found id: ""
	I1209 11:55:24.744139  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.744148  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:24.744154  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:24.744210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:24.777811  662586 cri.go:89] found id: ""
	I1209 11:55:24.777853  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.777866  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:24.777876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:24.777938  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:24.810524  662586 cri.go:89] found id: ""
	I1209 11:55:24.810558  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.810571  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:24.810580  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:24.810648  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:24.843551  662586 cri.go:89] found id: ""
	I1209 11:55:24.843582  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.843590  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:24.843597  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:24.843649  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:24.875342  662586 cri.go:89] found id: ""
	I1209 11:55:24.875371  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.875384  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:24.875390  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:24.875446  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:24.910298  662586 cri.go:89] found id: ""
	I1209 11:55:24.910329  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.910340  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:24.910352  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:24.910377  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:24.962151  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:24.962204  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:24.976547  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:24.976577  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:25.050606  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:25.050635  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:25.050652  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:25.134204  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:25.134254  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:27.671220  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:27.685132  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:27.685194  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:26.895311  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:28.895954  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:26.950106  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:28.950915  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:30.952019  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:27.591908  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:30.090353  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:27.718113  662586 cri.go:89] found id: ""
	I1209 11:55:27.718141  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.718150  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:27.718160  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:27.718242  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:27.752350  662586 cri.go:89] found id: ""
	I1209 11:55:27.752384  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.752395  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:27.752401  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:27.752481  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:27.797360  662586 cri.go:89] found id: ""
	I1209 11:55:27.797393  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.797406  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:27.797415  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:27.797488  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:27.834549  662586 cri.go:89] found id: ""
	I1209 11:55:27.834579  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.834588  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:27.834594  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:27.834655  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:27.874403  662586 cri.go:89] found id: ""
	I1209 11:55:27.874440  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.874465  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:27.874474  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:27.874557  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:27.914324  662586 cri.go:89] found id: ""
	I1209 11:55:27.914360  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.914373  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:27.914380  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:27.914450  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:27.948001  662586 cri.go:89] found id: ""
	I1209 11:55:27.948043  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.948056  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:27.948066  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:27.948219  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:27.982329  662586 cri.go:89] found id: ""
	I1209 11:55:27.982360  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.982369  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:27.982379  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:27.982391  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:28.038165  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:28.038228  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:28.051578  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:28.051609  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:28.119914  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:28.119937  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:28.119951  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:28.195634  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:28.195679  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:30.735392  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:30.748430  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:30.748521  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:30.780500  662586 cri.go:89] found id: ""
	I1209 11:55:30.780528  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.780537  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:30.780544  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:30.780606  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:30.812430  662586 cri.go:89] found id: ""
	I1209 11:55:30.812462  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.812470  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:30.812477  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:30.812530  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:30.854030  662586 cri.go:89] found id: ""
	I1209 11:55:30.854057  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.854066  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:30.854073  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:30.854130  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:30.892144  662586 cri.go:89] found id: ""
	I1209 11:55:30.892182  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.892202  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:30.892211  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:30.892284  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:30.927540  662586 cri.go:89] found id: ""
	I1209 11:55:30.927576  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.927590  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:30.927597  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:30.927660  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:30.963820  662586 cri.go:89] found id: ""
	I1209 11:55:30.963852  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.963861  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:30.963867  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:30.963920  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:30.997793  662586 cri.go:89] found id: ""
	I1209 11:55:30.997819  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.997828  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:30.997836  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:30.997902  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:31.031649  662586 cri.go:89] found id: ""
	I1209 11:55:31.031699  662586 logs.go:282] 0 containers: []
	W1209 11:55:31.031712  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:31.031726  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:31.031746  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:31.101464  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:31.101492  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:31.101509  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:31.184635  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:31.184681  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:31.222690  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:31.222732  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:31.276518  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:31.276566  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:30.896544  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.395861  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.451560  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:35.952567  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:32.091788  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:34.592091  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.790941  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:33.805299  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:33.805390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:33.844205  662586 cri.go:89] found id: ""
	I1209 11:55:33.844241  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.844253  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:33.844262  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:33.844337  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:33.883378  662586 cri.go:89] found id: ""
	I1209 11:55:33.883410  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.883424  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:33.883431  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:33.883505  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:33.920007  662586 cri.go:89] found id: ""
	I1209 11:55:33.920049  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.920061  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:33.920074  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:33.920141  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:33.956111  662586 cri.go:89] found id: ""
	I1209 11:55:33.956163  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.956175  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:33.956183  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:33.956241  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:33.990057  662586 cri.go:89] found id: ""
	I1209 11:55:33.990092  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.990102  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:33.990109  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:33.990166  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:34.023046  662586 cri.go:89] found id: ""
	I1209 11:55:34.023082  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.023096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:34.023103  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:34.023171  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:34.055864  662586 cri.go:89] found id: ""
	I1209 11:55:34.055898  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.055909  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:34.055916  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:34.055987  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:34.091676  662586 cri.go:89] found id: ""
	I1209 11:55:34.091710  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.091722  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:34.091733  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:34.091747  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:34.142959  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:34.143002  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:34.156431  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:34.156466  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:34.230277  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:34.230303  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:34.230320  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:34.313660  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:34.313713  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:36.850056  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:36.862486  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:36.862582  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:36.893134  662586 cri.go:89] found id: ""
	I1209 11:55:36.893163  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.893173  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:36.893179  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:36.893257  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:36.927438  662586 cri.go:89] found id: ""
	I1209 11:55:36.927469  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.927479  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:36.927485  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:36.927546  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:36.958787  662586 cri.go:89] found id: ""
	I1209 11:55:36.958818  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.958829  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:36.958837  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:36.958901  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:36.995470  662586 cri.go:89] found id: ""
	I1209 11:55:36.995508  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.995520  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:36.995529  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:36.995590  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:37.026705  662586 cri.go:89] found id: ""
	I1209 11:55:37.026736  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.026746  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:37.026752  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:37.026805  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:37.059717  662586 cri.go:89] found id: ""
	I1209 11:55:37.059748  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.059756  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:37.059762  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:37.059820  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:37.094049  662586 cri.go:89] found id: ""
	I1209 11:55:37.094076  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.094088  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:37.094097  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:37.094190  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:37.128684  662586 cri.go:89] found id: ""
	I1209 11:55:37.128715  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.128724  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:37.128735  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:37.128755  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:37.177932  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:37.177973  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:37.191218  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:37.191252  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:37.256488  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:37.256521  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:37.256538  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:37.330603  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:37.330647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:35.895823  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.895972  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.952764  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:40.450704  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.092013  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:39.591402  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:39.868604  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:39.881991  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:39.882063  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:39.916750  662586 cri.go:89] found id: ""
	I1209 11:55:39.916786  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.916799  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:39.916806  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:39.916874  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:39.957744  662586 cri.go:89] found id: ""
	I1209 11:55:39.957773  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.957781  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:39.957788  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:39.957854  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:39.994613  662586 cri.go:89] found id: ""
	I1209 11:55:39.994645  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.994654  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:39.994661  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:39.994726  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:40.032606  662586 cri.go:89] found id: ""
	I1209 11:55:40.032635  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.032644  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:40.032650  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:40.032710  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:40.067172  662586 cri.go:89] found id: ""
	I1209 11:55:40.067204  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.067214  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:40.067221  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:40.067278  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:40.101391  662586 cri.go:89] found id: ""
	I1209 11:55:40.101423  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.101432  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:40.101439  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:40.101510  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:40.133160  662586 cri.go:89] found id: ""
	I1209 11:55:40.133196  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.133209  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:40.133217  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:40.133283  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:40.166105  662586 cri.go:89] found id: ""
	I1209 11:55:40.166137  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.166145  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:40.166160  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:40.166187  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:40.231525  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:40.231559  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:40.231582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:40.311298  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:40.311354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:40.350040  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:40.350077  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:40.404024  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:40.404061  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:39.896541  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.396800  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.453720  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:44.950595  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.091300  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:44.591230  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.917868  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:42.930289  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:42.930357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:42.962822  662586 cri.go:89] found id: ""
	I1209 11:55:42.962856  662586 logs.go:282] 0 containers: []
	W1209 11:55:42.962869  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:42.962878  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:42.962950  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:42.996932  662586 cri.go:89] found id: ""
	I1209 11:55:42.996962  662586 logs.go:282] 0 containers: []
	W1209 11:55:42.996972  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:42.996979  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:42.997040  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:43.031782  662586 cri.go:89] found id: ""
	I1209 11:55:43.031824  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.031837  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:43.031846  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:43.031915  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:43.064717  662586 cri.go:89] found id: ""
	I1209 11:55:43.064751  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.064764  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:43.064774  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:43.064851  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:43.097248  662586 cri.go:89] found id: ""
	I1209 11:55:43.097278  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.097287  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:43.097294  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:43.097356  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:43.135726  662586 cri.go:89] found id: ""
	I1209 11:55:43.135766  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.135779  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:43.135788  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:43.135881  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:43.171120  662586 cri.go:89] found id: ""
	I1209 11:55:43.171148  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.171157  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:43.171163  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:43.171216  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:43.207488  662586 cri.go:89] found id: ""
	I1209 11:55:43.207523  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.207533  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:43.207545  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:43.207565  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:43.276112  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:43.276142  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:43.276159  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:43.354942  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:43.354990  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:43.392755  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:43.392800  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:43.445708  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:43.445752  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:45.962533  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:45.975508  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:45.975589  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:46.009619  662586 cri.go:89] found id: ""
	I1209 11:55:46.009653  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.009663  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:46.009670  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:46.009726  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:46.042218  662586 cri.go:89] found id: ""
	I1209 11:55:46.042250  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.042259  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:46.042265  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:46.042318  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:46.076204  662586 cri.go:89] found id: ""
	I1209 11:55:46.076239  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.076249  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:46.076255  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:46.076326  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:46.113117  662586 cri.go:89] found id: ""
	I1209 11:55:46.113145  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.113154  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:46.113160  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:46.113225  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:46.148232  662586 cri.go:89] found id: ""
	I1209 11:55:46.148277  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.148293  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:46.148303  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:46.148379  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:46.185028  662586 cri.go:89] found id: ""
	I1209 11:55:46.185083  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.185096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:46.185106  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:46.185200  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:46.222882  662586 cri.go:89] found id: ""
	I1209 11:55:46.222920  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.222933  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:46.222941  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:46.223007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:46.263486  662586 cri.go:89] found id: ""
	I1209 11:55:46.263528  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.263538  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:46.263549  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:46.263565  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:46.340524  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:46.340550  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:46.340567  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:46.422768  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:46.422810  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:46.464344  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:46.464382  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:46.517311  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:46.517354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:44.895283  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.895427  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:48.895674  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.952912  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:48.953432  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.591521  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:49.093057  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:49.031192  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:49.043840  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:49.043929  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:49.077648  662586 cri.go:89] found id: ""
	I1209 11:55:49.077705  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.077720  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:49.077730  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:49.077802  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:49.114111  662586 cri.go:89] found id: ""
	I1209 11:55:49.114138  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.114146  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:49.114154  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:49.114236  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:49.147870  662586 cri.go:89] found id: ""
	I1209 11:55:49.147908  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.147917  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:49.147923  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:49.147976  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:49.185223  662586 cri.go:89] found id: ""
	I1209 11:55:49.185256  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.185269  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:49.185277  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:49.185350  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:49.218037  662586 cri.go:89] found id: ""
	I1209 11:55:49.218068  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.218077  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:49.218084  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:49.218138  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:49.255483  662586 cri.go:89] found id: ""
	I1209 11:55:49.255522  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.255535  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:49.255549  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:49.255629  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:49.288623  662586 cri.go:89] found id: ""
	I1209 11:55:49.288650  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.288659  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:49.288666  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:49.288732  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:49.322880  662586 cri.go:89] found id: ""
	I1209 11:55:49.322913  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.322921  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:49.322930  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:49.322943  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:49.372380  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:49.372428  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:49.385877  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:49.385914  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:49.460078  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:49.460101  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:49.460114  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:49.534588  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:49.534647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:52.071408  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:52.084198  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:52.084276  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:52.118908  662586 cri.go:89] found id: ""
	I1209 11:55:52.118937  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.118950  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:52.118958  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:52.119026  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:52.156494  662586 cri.go:89] found id: ""
	I1209 11:55:52.156521  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.156530  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:52.156535  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:52.156586  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:52.196037  662586 cri.go:89] found id: ""
	I1209 11:55:52.196075  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.196094  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:52.196102  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:52.196177  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:52.229436  662586 cri.go:89] found id: ""
	I1209 11:55:52.229465  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.229477  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:52.229486  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:52.229558  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:52.268751  662586 cri.go:89] found id: ""
	I1209 11:55:52.268785  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.268797  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:52.268805  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:52.268871  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:52.302405  662586 cri.go:89] found id: ""
	I1209 11:55:52.302436  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.302446  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:52.302453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:52.302522  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:52.338641  662586 cri.go:89] found id: ""
	I1209 11:55:52.338676  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.338688  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:52.338698  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:52.338754  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:52.375541  662586 cri.go:89] found id: ""
	I1209 11:55:52.375578  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.375591  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:52.375604  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:52.375624  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:52.389140  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:52.389190  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:52.460520  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:52.460546  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:52.460562  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:52.535234  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:52.535280  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:52.573317  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:52.573354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:50.896292  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:52.896875  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:51.453540  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:53.456640  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:55.950197  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:51.590899  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:53.591317  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:56.092219  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:55.124068  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:55.136800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:55.136868  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:55.169724  662586 cri.go:89] found id: ""
	I1209 11:55:55.169757  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.169769  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:55.169777  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:55.169843  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:55.207466  662586 cri.go:89] found id: ""
	I1209 11:55:55.207514  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.207528  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:55.207537  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:55.207600  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:55.241761  662586 cri.go:89] found id: ""
	I1209 11:55:55.241790  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.241801  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:55.241809  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:55.241874  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:55.274393  662586 cri.go:89] found id: ""
	I1209 11:55:55.274434  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.274447  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:55.274455  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:55.274522  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:55.307942  662586 cri.go:89] found id: ""
	I1209 11:55:55.307988  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.308002  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:55.308012  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:55.308088  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:55.340074  662586 cri.go:89] found id: ""
	I1209 11:55:55.340107  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.340116  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:55.340122  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:55.340196  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:55.388077  662586 cri.go:89] found id: ""
	I1209 11:55:55.388119  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.388140  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:55.388149  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:55.388230  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:55.422923  662586 cri.go:89] found id: ""
	I1209 11:55:55.422961  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.422975  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:55.422990  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:55.423008  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:55.476178  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:55.476219  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:55.489891  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:55.489919  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:55.555705  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:55.555726  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:55.555745  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:55.634818  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:55.634862  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:55.396320  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:57.895122  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:57.951119  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:00.451659  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:58.092427  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:00.590304  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:58.173169  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:58.188529  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:58.188620  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:58.225602  662586 cri.go:89] found id: ""
	I1209 11:55:58.225630  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.225641  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:58.225649  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:58.225709  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:58.259597  662586 cri.go:89] found id: ""
	I1209 11:55:58.259638  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.259652  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:58.259662  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:58.259744  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:58.293287  662586 cri.go:89] found id: ""
	I1209 11:55:58.293320  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.293329  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:58.293336  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:58.293390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:58.326581  662586 cri.go:89] found id: ""
	I1209 11:55:58.326611  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.326622  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:58.326630  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:58.326699  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:58.359636  662586 cri.go:89] found id: ""
	I1209 11:55:58.359665  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.359675  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:58.359681  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:58.359736  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:58.396767  662586 cri.go:89] found id: ""
	I1209 11:55:58.396798  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.396809  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:58.396818  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:58.396887  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:58.428907  662586 cri.go:89] found id: ""
	I1209 11:55:58.428941  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.428954  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:58.428962  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:58.429032  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:58.466082  662586 cri.go:89] found id: ""
	I1209 11:55:58.466124  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.466136  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:58.466149  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:58.466186  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:58.542333  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:58.542378  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:58.582397  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:58.582436  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:58.632980  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:58.633030  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:58.648464  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:58.648514  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:58.711714  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:01.212475  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:01.225574  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:01.225642  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:01.259666  662586 cri.go:89] found id: ""
	I1209 11:56:01.259704  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.259718  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:01.259726  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:01.259800  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:01.295433  662586 cri.go:89] found id: ""
	I1209 11:56:01.295474  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.295495  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:01.295503  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:01.295561  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:01.330316  662586 cri.go:89] found id: ""
	I1209 11:56:01.330352  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.330364  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:01.330373  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:01.330447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:01.366762  662586 cri.go:89] found id: ""
	I1209 11:56:01.366797  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.366808  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:01.366814  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:01.366878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:01.403511  662586 cri.go:89] found id: ""
	I1209 11:56:01.403539  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.403547  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:01.403553  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:01.403604  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:01.436488  662586 cri.go:89] found id: ""
	I1209 11:56:01.436526  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.436538  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:01.436546  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:01.436617  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:01.471647  662586 cri.go:89] found id: ""
	I1209 11:56:01.471676  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.471685  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:01.471690  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:01.471744  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:01.504065  662586 cri.go:89] found id: ""
	I1209 11:56:01.504099  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.504111  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:01.504124  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:01.504143  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:01.553434  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:01.553482  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:01.567537  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:01.567579  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:01.636968  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:01.636995  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:01.637012  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:01.713008  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:01.713049  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:59.896841  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.396972  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.451893  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.453118  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.591218  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.592199  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.253143  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:04.266428  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:04.266512  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:04.298769  662586 cri.go:89] found id: ""
	I1209 11:56:04.298810  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.298823  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:04.298833  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:04.298913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:04.330392  662586 cri.go:89] found id: ""
	I1209 11:56:04.330428  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.330441  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:04.330449  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:04.330528  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:04.362409  662586 cri.go:89] found id: ""
	I1209 11:56:04.362443  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.362455  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:04.362463  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:04.362544  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:04.396853  662586 cri.go:89] found id: ""
	I1209 11:56:04.396884  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.396893  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:04.396899  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:04.396966  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:04.430425  662586 cri.go:89] found id: ""
	I1209 11:56:04.430461  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.430470  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:04.430477  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:04.430531  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:04.465354  662586 cri.go:89] found id: ""
	I1209 11:56:04.465391  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.465403  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:04.465411  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:04.465480  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:04.500114  662586 cri.go:89] found id: ""
	I1209 11:56:04.500156  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.500167  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:04.500179  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:04.500259  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:04.534853  662586 cri.go:89] found id: ""
	I1209 11:56:04.534888  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.534902  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:04.534914  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:04.534928  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:04.586419  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:04.586457  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:04.600690  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:04.600728  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:04.669645  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:04.669685  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:04.669703  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:04.747973  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:04.748026  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:07.288721  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:07.302905  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:07.302975  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:07.336686  662586 cri.go:89] found id: ""
	I1209 11:56:07.336720  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.336728  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:07.336735  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:07.336798  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:07.370119  662586 cri.go:89] found id: ""
	I1209 11:56:07.370150  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.370159  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:07.370165  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:07.370245  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:07.402818  662586 cri.go:89] found id: ""
	I1209 11:56:07.402845  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.402853  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:07.402861  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:07.402923  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:07.437694  662586 cri.go:89] found id: ""
	I1209 11:56:07.437722  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.437732  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:07.437741  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:07.437806  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:07.474576  662586 cri.go:89] found id: ""
	I1209 11:56:07.474611  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.474622  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:07.474629  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:07.474705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:07.508538  662586 cri.go:89] found id: ""
	I1209 11:56:07.508575  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.508585  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:07.508592  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:07.508661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:07.548863  662586 cri.go:89] found id: ""
	I1209 11:56:07.548897  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.548911  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:07.548922  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:07.549093  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:07.592515  662586 cri.go:89] found id: ""
	I1209 11:56:07.592543  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.592555  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:07.592564  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:07.592579  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:07.652176  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:07.652219  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:04.895898  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.395712  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.398273  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:06.950668  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.450539  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.091573  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.591049  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.703040  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:07.703094  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:07.717880  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:07.717924  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:07.783396  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:07.783425  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:07.783441  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:10.362395  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:10.377478  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:10.377574  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:10.411923  662586 cri.go:89] found id: ""
	I1209 11:56:10.411956  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.411969  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:10.411978  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:10.412049  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:10.444601  662586 cri.go:89] found id: ""
	I1209 11:56:10.444633  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.444642  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:10.444648  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:10.444705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:10.486720  662586 cri.go:89] found id: ""
	I1209 11:56:10.486753  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.486763  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:10.486769  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:10.486822  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:10.523535  662586 cri.go:89] found id: ""
	I1209 11:56:10.523572  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.523581  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:10.523587  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:10.523641  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:10.557701  662586 cri.go:89] found id: ""
	I1209 11:56:10.557741  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.557754  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:10.557762  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:10.557834  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:10.593914  662586 cri.go:89] found id: ""
	I1209 11:56:10.593949  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.593959  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:10.593965  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:10.594017  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:10.626367  662586 cri.go:89] found id: ""
	I1209 11:56:10.626469  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.626482  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:10.626489  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:10.626547  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:10.665415  662586 cri.go:89] found id: ""
	I1209 11:56:10.665446  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.665456  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:10.665467  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:10.665480  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:10.747483  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:10.747532  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:10.787728  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:10.787758  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:10.840678  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:10.840722  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:10.855774  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:10.855809  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:10.929638  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:11.896254  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:14.395661  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:11.451031  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.452502  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:15.951720  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:11.592197  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.593711  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:16.091641  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.430793  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:13.446156  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:13.446261  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:13.491624  662586 cri.go:89] found id: ""
	I1209 11:56:13.491662  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.491675  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:13.491684  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:13.491758  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:13.537619  662586 cri.go:89] found id: ""
	I1209 11:56:13.537653  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.537666  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:13.537675  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:13.537750  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:13.585761  662586 cri.go:89] found id: ""
	I1209 11:56:13.585796  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.585810  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:13.585819  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:13.585883  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:13.620740  662586 cri.go:89] found id: ""
	I1209 11:56:13.620774  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.620785  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:13.620791  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:13.620858  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:13.654405  662586 cri.go:89] found id: ""
	I1209 11:56:13.654433  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.654442  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:13.654448  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:13.654509  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:13.687520  662586 cri.go:89] found id: ""
	I1209 11:56:13.687547  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.687558  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:13.687566  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:13.687642  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:13.721105  662586 cri.go:89] found id: ""
	I1209 11:56:13.721140  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.721153  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:13.721162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:13.721238  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:13.753900  662586 cri.go:89] found id: ""
	I1209 11:56:13.753933  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.753945  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:13.753960  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:13.753978  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:13.805864  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:13.805909  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:13.819356  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:13.819393  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:13.896097  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:13.896128  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:13.896150  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:13.979041  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:13.979084  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:16.516777  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:16.529916  662586 kubeadm.go:597] duration metric: took 4m1.869807937s to restartPrimaryControlPlane
	W1209 11:56:16.530015  662586 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:56:16.530067  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:56:16.396353  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.896097  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.451294  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:20.452525  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.092780  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:20.593275  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.635832  662586 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.105742271s)
	I1209 11:56:18.635914  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:56:18.651678  662586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:56:18.661965  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:56:18.672060  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:56:18.672082  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:56:18.672147  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:56:18.681627  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:56:18.681697  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:56:18.691514  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:56:18.701210  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:56:18.701292  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:56:18.710934  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:56:18.720506  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:56:18.720583  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:56:18.729996  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:56:18.739425  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:56:18.739486  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:56:18.748788  662586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:56:18.981849  662586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:56:21.396764  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:23.894781  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:22.950912  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:24.951678  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:23.091306  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:24.592439  662109 pod_ready.go:82] duration metric: took 4m0.007699806s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	E1209 11:56:24.592477  662109 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1209 11:56:24.592486  662109 pod_ready.go:39] duration metric: took 4m7.416528348s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:56:24.592504  662109 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:56:24.592537  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:24.592590  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:24.643050  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:24.643085  662109 cri.go:89] found id: ""
	I1209 11:56:24.643094  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:24.643151  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.647529  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:24.647590  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:24.683125  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:24.683150  662109 cri.go:89] found id: ""
	I1209 11:56:24.683159  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:24.683222  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.687584  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:24.687706  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:24.720663  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:24.720699  662109 cri.go:89] found id: ""
	I1209 11:56:24.720708  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:24.720769  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.724881  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:24.724942  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:24.766055  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:24.766081  662109 cri.go:89] found id: ""
	I1209 11:56:24.766091  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:24.766152  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.770491  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:24.770557  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:24.804523  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:24.804549  662109 cri.go:89] found id: ""
	I1209 11:56:24.804558  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:24.804607  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.808452  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:24.808528  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:24.846043  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:24.846072  662109 cri.go:89] found id: ""
	I1209 11:56:24.846084  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:24.846140  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.849991  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:24.850057  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:24.884853  662109 cri.go:89] found id: ""
	I1209 11:56:24.884889  662109 logs.go:282] 0 containers: []
	W1209 11:56:24.884902  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:24.884912  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:24.884983  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:24.920103  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:24.920131  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:24.920135  662109 cri.go:89] found id: ""
	I1209 11:56:24.920152  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:24.920223  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.924212  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.928416  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:24.928436  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:25.077407  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:25.077468  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:25.125600  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:25.125649  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:25.163222  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:25.163268  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:25.208430  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:25.208465  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:25.245884  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:25.245917  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:25.318723  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:25.318775  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:25.333173  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:25.333207  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:25.394636  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:25.394683  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:25.435210  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:25.435248  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:25.482142  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:25.482184  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:25.516975  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:25.517006  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:25.565526  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:25.565565  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:25.896281  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:28.395529  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:27.454449  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:29.950704  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:28.549071  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:28.567288  662109 api_server.go:72] duration metric: took 4m18.770451099s to wait for apiserver process to appear ...
	I1209 11:56:28.567319  662109 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:56:28.567367  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:28.567418  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:28.603341  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:28.603365  662109 cri.go:89] found id: ""
	I1209 11:56:28.603372  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:28.603423  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.607416  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:28.607493  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:28.647437  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:28.647465  662109 cri.go:89] found id: ""
	I1209 11:56:28.647477  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:28.647539  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.651523  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:28.651584  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:28.687889  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:28.687920  662109 cri.go:89] found id: ""
	I1209 11:56:28.687929  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:28.687983  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.692025  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:28.692100  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:28.728934  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:28.728961  662109 cri.go:89] found id: ""
	I1209 11:56:28.728969  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:28.729020  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.733217  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:28.733300  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:28.768700  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:28.768726  662109 cri.go:89] found id: ""
	I1209 11:56:28.768735  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:28.768790  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.772844  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:28.772921  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:28.812073  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:28.812104  662109 cri.go:89] found id: ""
	I1209 11:56:28.812116  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:28.812195  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.816542  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:28.816612  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:28.850959  662109 cri.go:89] found id: ""
	I1209 11:56:28.850997  662109 logs.go:282] 0 containers: []
	W1209 11:56:28.851010  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:28.851018  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:28.851075  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:28.894115  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:28.894142  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:28.894148  662109 cri.go:89] found id: ""
	I1209 11:56:28.894157  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:28.894228  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.899260  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.903033  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:28.903055  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:28.916411  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:28.916447  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:28.965873  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:28.965911  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:29.003553  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:29.003591  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:29.038945  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:29.038989  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:29.079595  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:29.079636  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:29.117632  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:29.117665  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:29.556193  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:29.556245  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:29.629530  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:29.629571  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:29.746102  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:29.746137  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:29.799342  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:29.799379  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:29.851197  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:29.851254  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:29.884688  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:29.884725  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:30.396025  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:32.396195  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:34.396605  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:31.951405  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:34.451838  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:32.425773  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:56:32.432276  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I1209 11:56:32.433602  662109 api_server.go:141] control plane version: v1.31.2
	I1209 11:56:32.433634  662109 api_server.go:131] duration metric: took 3.866306159s to wait for apiserver health ...
	I1209 11:56:32.433647  662109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:56:32.433680  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:32.433744  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:32.471560  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:32.471593  662109 cri.go:89] found id: ""
	I1209 11:56:32.471604  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:32.471684  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.475735  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:32.475809  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:32.509788  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:32.509821  662109 cri.go:89] found id: ""
	I1209 11:56:32.509833  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:32.509889  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.513849  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:32.513908  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:32.547022  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:32.547046  662109 cri.go:89] found id: ""
	I1209 11:56:32.547055  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:32.547113  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.551393  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:32.551476  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:32.586478  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:32.586516  662109 cri.go:89] found id: ""
	I1209 11:56:32.586536  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:32.586605  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.592876  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:32.592950  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:32.626775  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:32.626803  662109 cri.go:89] found id: ""
	I1209 11:56:32.626812  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:32.626869  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.630757  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:32.630825  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:32.663980  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:32.664013  662109 cri.go:89] found id: ""
	I1209 11:56:32.664026  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:32.664093  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.668368  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:32.668449  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:32.704638  662109 cri.go:89] found id: ""
	I1209 11:56:32.704675  662109 logs.go:282] 0 containers: []
	W1209 11:56:32.704688  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:32.704695  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:32.704752  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:32.743694  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:32.743729  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:32.743735  662109 cri.go:89] found id: ""
	I1209 11:56:32.743746  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:32.743814  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.749146  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.753226  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:32.753253  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:32.787832  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:32.787877  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:32.824859  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:32.824891  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:32.881776  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:32.881808  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:32.919018  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:32.919064  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:32.956839  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:32.956869  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:33.334255  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:33.334300  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:33.406008  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:33.406049  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:33.453689  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:33.453724  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:33.496168  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:33.496209  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:33.532057  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:33.532090  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:33.575050  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:33.575087  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:33.588543  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:33.588575  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:36.194483  662109 system_pods.go:59] 8 kube-system pods found
	I1209 11:56:36.194516  662109 system_pods.go:61] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running
	I1209 11:56:36.194522  662109 system_pods.go:61] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running
	I1209 11:56:36.194527  662109 system_pods.go:61] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running
	I1209 11:56:36.194531  662109 system_pods.go:61] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running
	I1209 11:56:36.194534  662109 system_pods.go:61] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:56:36.194538  662109 system_pods.go:61] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running
	I1209 11:56:36.194543  662109 system_pods.go:61] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:56:36.194549  662109 system_pods.go:61] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running
	I1209 11:56:36.194559  662109 system_pods.go:74] duration metric: took 3.76090495s to wait for pod list to return data ...
	I1209 11:56:36.194567  662109 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:56:36.197070  662109 default_sa.go:45] found service account: "default"
	I1209 11:56:36.197094  662109 default_sa.go:55] duration metric: took 2.520926ms for default service account to be created ...
	I1209 11:56:36.197104  662109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:56:36.201494  662109 system_pods.go:86] 8 kube-system pods found
	I1209 11:56:36.201518  662109 system_pods.go:89] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running
	I1209 11:56:36.201524  662109 system_pods.go:89] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running
	I1209 11:56:36.201528  662109 system_pods.go:89] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running
	I1209 11:56:36.201533  662109 system_pods.go:89] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running
	I1209 11:56:36.201537  662109 system_pods.go:89] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:56:36.201540  662109 system_pods.go:89] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running
	I1209 11:56:36.201547  662109 system_pods.go:89] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:56:36.201551  662109 system_pods.go:89] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running
	I1209 11:56:36.201558  662109 system_pods.go:126] duration metric: took 4.448871ms to wait for k8s-apps to be running ...
	I1209 11:56:36.201567  662109 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:56:36.201628  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:56:36.217457  662109 system_svc.go:56] duration metric: took 15.878252ms WaitForService to wait for kubelet
	I1209 11:56:36.217503  662109 kubeadm.go:582] duration metric: took 4m26.420670146s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:56:36.217527  662109 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:56:36.220498  662109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:56:36.220526  662109 node_conditions.go:123] node cpu capacity is 2
	I1209 11:56:36.220572  662109 node_conditions.go:105] duration metric: took 3.039367ms to run NodePressure ...
	I1209 11:56:36.220586  662109 start.go:241] waiting for startup goroutines ...
	I1209 11:56:36.220597  662109 start.go:246] waiting for cluster config update ...
	I1209 11:56:36.220628  662109 start.go:255] writing updated cluster config ...
	I1209 11:56:36.220974  662109 ssh_runner.go:195] Run: rm -f paused
	I1209 11:56:36.272920  662109 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:56:36.274686  662109 out.go:177] * Done! kubectl is now configured to use "no-preload-820741" cluster and "default" namespace by default
	I1209 11:56:36.895681  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:38.896066  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:36.951281  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:39.455225  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:41.395880  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:43.895464  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:41.951287  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:44.451357  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:45.896184  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:48.398617  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:46.451733  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:48.950857  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:50.950964  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:50.895678  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:52.896291  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:53.389365  663024 pod_ready.go:82] duration metric: took 4m0.00015362s for pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace to be "Ready" ...
	E1209 11:56:53.389414  663024 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1209 11:56:53.389440  663024 pod_ready.go:39] duration metric: took 4m13.044002506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:56:53.389480  663024 kubeadm.go:597] duration metric: took 4m21.286289463s to restartPrimaryControlPlane
	W1209 11:56:53.389572  663024 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:56:53.389610  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:56:52.951153  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:55.451223  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:57.950413  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:00.449904  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:02.450069  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:04.451074  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:06.950873  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:08.951176  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:11.450596  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:13.451552  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:13.944884  661546 pod_ready.go:82] duration metric: took 4m0.000348644s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" ...
	E1209 11:57:13.944919  661546 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I1209 11:57:13.944943  661546 pod_ready.go:39] duration metric: took 4m14.049505666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:13.944980  661546 kubeadm.go:597] duration metric: took 4m22.094543781s to restartPrimaryControlPlane
	W1209 11:57:13.945086  661546 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:57:13.945123  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:57:19.569119  663024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.179481312s)
	I1209 11:57:19.569196  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:19.583584  663024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:57:19.592807  663024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:57:19.602121  663024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:57:19.602190  663024 kubeadm.go:157] found existing configuration files:
	
	I1209 11:57:19.602249  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 11:57:19.611109  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:57:19.611187  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:57:19.620264  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 11:57:19.629026  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:57:19.629103  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:57:19.638036  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 11:57:19.646265  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:57:19.646331  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:57:19.655187  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 11:57:19.663908  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:57:19.663962  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:57:19.673002  663024 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:57:19.717664  663024 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 11:57:19.717737  663024 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:57:19.818945  663024 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:57:19.819065  663024 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:57:19.819160  663024 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 11:57:19.828186  663024 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:57:19.829831  663024 out.go:235]   - Generating certificates and keys ...
	I1209 11:57:19.829938  663024 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:57:19.830031  663024 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:57:19.830145  663024 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:57:19.830252  663024 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:57:19.830377  663024 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:57:19.830470  663024 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:57:19.830568  663024 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:57:19.830644  663024 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:57:19.830745  663024 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:57:19.830825  663024 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:57:19.830878  663024 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:57:19.830963  663024 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:57:19.961813  663024 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:57:20.436964  663024 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 11:57:20.652041  663024 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:57:20.837664  663024 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:57:20.892035  663024 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:57:20.892497  663024 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:57:20.895295  663024 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:57:20.896871  663024 out.go:235]   - Booting up control plane ...
	I1209 11:57:20.896992  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:57:20.897139  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:57:20.897260  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:57:20.914735  663024 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:57:20.920520  663024 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:57:20.920566  663024 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:57:21.047290  663024 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 11:57:21.047437  663024 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 11:57:22.049131  663024 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001914766s
	I1209 11:57:22.049257  663024 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 11:57:27.053443  663024 kubeadm.go:310] [api-check] The API server is healthy after 5.002570817s
	I1209 11:57:27.068518  663024 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 11:57:27.086371  663024 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 11:57:27.114617  663024 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 11:57:27.114833  663024 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-482476 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 11:57:27.131354  663024 kubeadm.go:310] [bootstrap-token] Using token: 6aanjy.0y855mmcca5ic9co
	I1209 11:57:27.132852  663024 out.go:235]   - Configuring RBAC rules ...
	I1209 11:57:27.132992  663024 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 11:57:27.139770  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 11:57:27.147974  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 11:57:27.155508  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 11:57:27.159181  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 11:57:27.163403  663024 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 11:57:27.458812  663024 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 11:57:27.900322  663024 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 11:57:28.458864  663024 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 11:57:28.459944  663024 kubeadm.go:310] 
	I1209 11:57:28.460043  663024 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 11:57:28.460054  663024 kubeadm.go:310] 
	I1209 11:57:28.460156  663024 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 11:57:28.460166  663024 kubeadm.go:310] 
	I1209 11:57:28.460198  663024 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 11:57:28.460284  663024 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 11:57:28.460385  663024 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 11:57:28.460414  663024 kubeadm.go:310] 
	I1209 11:57:28.460499  663024 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 11:57:28.460509  663024 kubeadm.go:310] 
	I1209 11:57:28.460576  663024 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 11:57:28.460586  663024 kubeadm.go:310] 
	I1209 11:57:28.460663  663024 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 11:57:28.460766  663024 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 11:57:28.460862  663024 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 11:57:28.460871  663024 kubeadm.go:310] 
	I1209 11:57:28.460992  663024 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 11:57:28.461096  663024 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 11:57:28.461121  663024 kubeadm.go:310] 
	I1209 11:57:28.461244  663024 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 6aanjy.0y855mmcca5ic9co \
	I1209 11:57:28.461395  663024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 11:57:28.461435  663024 kubeadm.go:310] 	--control-plane 
	I1209 11:57:28.461446  663024 kubeadm.go:310] 
	I1209 11:57:28.461551  663024 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 11:57:28.461574  663024 kubeadm.go:310] 
	I1209 11:57:28.461679  663024 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 6aanjy.0y855mmcca5ic9co \
	I1209 11:57:28.461832  663024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 11:57:28.462544  663024 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:57:28.462594  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:57:28.462620  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:57:28.464574  663024 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:57:28.465952  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:57:28.476155  663024 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:57:28.493471  663024 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:57:28.493551  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:28.493594  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-482476 minikube.k8s.io/updated_at=2024_12_09T11_57_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=default-k8s-diff-port-482476 minikube.k8s.io/primary=true
	I1209 11:57:28.506467  663024 ops.go:34] apiserver oom_adj: -16
	I1209 11:57:28.724224  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:29.224971  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:29.724660  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:30.224466  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:30.724354  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:31.224702  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:31.725101  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.224364  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.724357  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.844191  663024 kubeadm.go:1113] duration metric: took 4.350713188s to wait for elevateKubeSystemPrivileges
	I1209 11:57:32.844243  663024 kubeadm.go:394] duration metric: took 5m0.79272843s to StartCluster
	I1209 11:57:32.844287  663024 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:32.844417  663024 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:57:32.846697  663024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:32.847014  663024 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:57:32.847067  663024 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:57:32.847162  663024 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847186  663024 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847192  663024 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.847201  663024 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:57:32.847204  663024 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-482476"
	I1209 11:57:32.847228  663024 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847272  663024 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.847287  663024 addons.go:243] addon metrics-server should already be in state true
	I1209 11:57:32.847285  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:57:32.847328  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.847237  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.847705  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847713  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847750  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.847755  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.847841  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847873  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.848599  663024 out.go:177] * Verifying Kubernetes components...
	I1209 11:57:32.850246  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:57:32.864945  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44639
	I1209 11:57:32.865141  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37211
	I1209 11:57:32.865203  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42835
	I1209 11:57:32.865473  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.865635  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.865733  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.866096  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866115  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866241  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866264  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866241  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866316  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866642  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866654  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866656  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866865  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.867243  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.867287  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.867321  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.867358  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.871085  663024 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.871109  663024 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:57:32.871142  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.871395  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.871431  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.883301  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45049
	I1209 11:57:32.883976  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.884508  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36509
	I1209 11:57:32.884758  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.884775  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.885123  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.885279  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.885610  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.885801  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.885817  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.886142  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.886347  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.888357  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.888762  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
	I1209 11:57:32.889103  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.889192  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.889669  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.889692  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.890035  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.890082  663024 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:57:32.890647  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.890687  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.890867  663024 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:57:32.891756  663024 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:32.891774  663024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:57:32.891794  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.892543  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:57:32.892563  663024 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:57:32.892587  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.896754  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897437  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.897471  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897752  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.897836  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897975  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.898370  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.898381  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.898395  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.898556  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:32.898649  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.898829  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.898975  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.899101  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:32.907891  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34063
	I1209 11:57:32.908317  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.908827  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.908848  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.909352  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.909551  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.911172  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.911417  663024 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:32.911434  663024 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:57:32.911460  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.914016  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.914474  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.914490  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.914646  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.914838  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.914965  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.915071  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:33.067075  663024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:57:33.085671  663024 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-482476" to be "Ready" ...
	I1209 11:57:33.095765  663024 node_ready.go:49] node "default-k8s-diff-port-482476" has status "Ready":"True"
	I1209 11:57:33.095801  663024 node_ready.go:38] duration metric: took 10.096442ms for node "default-k8s-diff-port-482476" to be "Ready" ...
	I1209 11:57:33.095815  663024 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:33.105497  663024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:33.200059  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:33.218467  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:57:33.218496  663024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:57:33.225990  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:33.278736  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:57:33.278772  663024 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:57:33.342270  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:33.342304  663024 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:57:33.412771  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:34.250639  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.050535014s)
	I1209 11:57:34.250706  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.250720  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.250704  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.024681453s)
	I1209 11:57:34.250811  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.250820  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.251151  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.251170  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.251182  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.251192  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.251197  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.251238  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.251245  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.251253  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.251261  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.253136  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.253141  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.253180  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.253182  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.253194  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.253214  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.279650  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.279682  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.280064  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.280116  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.280130  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.656217  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.243394493s)
	I1209 11:57:34.656287  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.656305  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.656641  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.656655  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.656671  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.656683  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.656691  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.656982  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.656999  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.657011  663024 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-482476"
	I1209 11:57:34.658878  663024 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1209 11:57:34.660089  663024 addons.go:510] duration metric: took 1.813029421s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1209 11:57:35.122487  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:36.112072  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.112097  663024 pod_ready.go:82] duration metric: took 3.006564547s for pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.112110  663024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.117521  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.117545  663024 pod_ready.go:82] duration metric: took 5.428168ms for pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.117554  663024 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.122929  663024 pod_ready.go:93] pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.122953  663024 pod_ready.go:82] duration metric: took 5.392834ms for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.122972  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.127025  663024 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.127047  663024 pod_ready.go:82] duration metric: took 4.068175ms for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.127056  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.131036  663024 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.131055  663024 pod_ready.go:82] duration metric: took 3.993825ms for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.131064  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pgs52" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.508951  663024 pod_ready.go:93] pod "kube-proxy-pgs52" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.508980  663024 pod_ready.go:82] duration metric: took 377.910722ms for pod "kube-proxy-pgs52" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.508991  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.909065  663024 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.909093  663024 pod_ready.go:82] duration metric: took 400.095775ms for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.909100  663024 pod_ready.go:39] duration metric: took 3.813270613s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:36.909116  663024 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:57:36.909169  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:57:36.924688  663024 api_server.go:72] duration metric: took 4.077626254s to wait for apiserver process to appear ...
	I1209 11:57:36.924726  663024 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:57:36.924752  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:57:36.930782  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 200:
	ok
	I1209 11:57:36.931734  663024 api_server.go:141] control plane version: v1.31.2
	I1209 11:57:36.931758  663024 api_server.go:131] duration metric: took 7.024599ms to wait for apiserver health ...
	I1209 11:57:36.931766  663024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:57:37.112291  663024 system_pods.go:59] 9 kube-system pods found
	I1209 11:57:37.112323  663024 system_pods.go:61] "coredns-7c65d6cfc9-7rr27" [a5dd0401-80bf-4c87-9771-e1837c960425] Running
	I1209 11:57:37.112328  663024 system_pods.go:61] "coredns-7c65d6cfc9-bb47s" [2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4] Running
	I1209 11:57:37.112332  663024 system_pods.go:61] "etcd-default-k8s-diff-port-482476" [01dfcc5e-2ac5-4cee-8b68-ac1f3bf8866b] Running
	I1209 11:57:37.112337  663024 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-482476" [c65f1162-8d8a-47e8-8f1a-4abc0ebc8649] Running
	I1209 11:57:37.112340  663024 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-482476" [363e5f34-878a-434e-8bf2-21cdc06be305] Running
	I1209 11:57:37.112343  663024 system_pods.go:61] "kube-proxy-pgs52" [d5a3463e-e955-4345-9559-b23cce44fa0e] Running
	I1209 11:57:37.112346  663024 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-482476" [53f924fa-554b-4ceb-b171-efcfecdc137e] Running
	I1209 11:57:37.112356  663024 system_pods.go:61] "metrics-server-6867b74b74-2lmtn" [60803d31-d0b0-4d51-a9f2-cadafd184a90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:57:37.112363  663024 system_pods.go:61] "storage-provisioner" [6b53e3ba-9bc9-4b5a-bec9-d06336616c8a] Running
	I1209 11:57:37.112373  663024 system_pods.go:74] duration metric: took 180.599339ms to wait for pod list to return data ...
	I1209 11:57:37.112387  663024 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:57:37.309750  663024 default_sa.go:45] found service account: "default"
	I1209 11:57:37.309777  663024 default_sa.go:55] duration metric: took 197.382304ms for default service account to be created ...
	I1209 11:57:37.309787  663024 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:57:37.513080  663024 system_pods.go:86] 9 kube-system pods found
	I1209 11:57:37.513112  663024 system_pods.go:89] "coredns-7c65d6cfc9-7rr27" [a5dd0401-80bf-4c87-9771-e1837c960425] Running
	I1209 11:57:37.513118  663024 system_pods.go:89] "coredns-7c65d6cfc9-bb47s" [2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4] Running
	I1209 11:57:37.513121  663024 system_pods.go:89] "etcd-default-k8s-diff-port-482476" [01dfcc5e-2ac5-4cee-8b68-ac1f3bf8866b] Running
	I1209 11:57:37.513128  663024 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-482476" [c65f1162-8d8a-47e8-8f1a-4abc0ebc8649] Running
	I1209 11:57:37.513133  663024 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-482476" [363e5f34-878a-434e-8bf2-21cdc06be305] Running
	I1209 11:57:37.513136  663024 system_pods.go:89] "kube-proxy-pgs52" [d5a3463e-e955-4345-9559-b23cce44fa0e] Running
	I1209 11:57:37.513141  663024 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-482476" [53f924fa-554b-4ceb-b171-efcfecdc137e] Running
	I1209 11:57:37.513150  663024 system_pods.go:89] "metrics-server-6867b74b74-2lmtn" [60803d31-d0b0-4d51-a9f2-cadafd184a90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:57:37.513156  663024 system_pods.go:89] "storage-provisioner" [6b53e3ba-9bc9-4b5a-bec9-d06336616c8a] Running
	I1209 11:57:37.513168  663024 system_pods.go:126] duration metric: took 203.373238ms to wait for k8s-apps to be running ...
	I1209 11:57:37.513181  663024 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:57:37.513233  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:37.527419  663024 system_svc.go:56] duration metric: took 14.22618ms WaitForService to wait for kubelet
	I1209 11:57:37.527451  663024 kubeadm.go:582] duration metric: took 4.680397826s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:57:37.527473  663024 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:57:37.710396  663024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:57:37.710429  663024 node_conditions.go:123] node cpu capacity is 2
	I1209 11:57:37.710447  663024 node_conditions.go:105] duration metric: took 182.968526ms to run NodePressure ...
	I1209 11:57:37.710463  663024 start.go:241] waiting for startup goroutines ...
	I1209 11:57:37.710473  663024 start.go:246] waiting for cluster config update ...
	I1209 11:57:37.710487  663024 start.go:255] writing updated cluster config ...
	I1209 11:57:37.710799  663024 ssh_runner.go:195] Run: rm -f paused
	I1209 11:57:37.760468  663024 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:57:37.762472  663024 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-482476" cluster and "default" namespace by default
	I1209 11:57:40.219406  661546 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.274255602s)
	I1209 11:57:40.219478  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:40.234863  661546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:57:40.245357  661546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:57:40.255253  661546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:57:40.255276  661546 kubeadm.go:157] found existing configuration files:
	
	I1209 11:57:40.255319  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:57:40.264881  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:57:40.264934  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:57:40.274990  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:57:40.284941  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:57:40.284998  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:57:40.295188  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:57:40.305136  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:57:40.305181  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:57:40.315125  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:57:40.324727  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:57:40.324789  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:57:40.333574  661546 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:57:40.378743  661546 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 11:57:40.378932  661546 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:57:40.492367  661546 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:57:40.492493  661546 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:57:40.492658  661546 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 11:57:40.504994  661546 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:57:40.506760  661546 out.go:235]   - Generating certificates and keys ...
	I1209 11:57:40.506878  661546 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:57:40.506955  661546 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:57:40.507033  661546 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:57:40.507088  661546 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:57:40.507156  661546 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:57:40.507274  661546 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:57:40.507377  661546 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:57:40.507463  661546 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:57:40.507573  661546 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:57:40.507692  661546 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:57:40.507756  661546 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:57:40.507836  661546 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:57:40.607744  661546 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:57:40.684950  661546 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 11:57:40.826079  661546 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:57:40.945768  661546 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:57:41.212984  661546 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:57:41.213406  661546 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:57:41.216390  661546 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:57:41.218053  661546 out.go:235]   - Booting up control plane ...
	I1209 11:57:41.218202  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:57:41.218307  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:57:41.220009  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:57:41.237816  661546 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:57:41.244148  661546 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:57:41.244204  661546 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:57:41.371083  661546 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 11:57:41.371245  661546 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 11:57:41.872938  661546 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.998998ms
	I1209 11:57:41.873141  661546 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 11:57:46.874725  661546 kubeadm.go:310] [api-check] The API server is healthy after 5.001587898s
	I1209 11:57:46.886996  661546 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 11:57:46.897941  661546 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 11:57:46.927451  661546 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 11:57:46.927718  661546 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-005123 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 11:57:46.945578  661546 kubeadm.go:310] [bootstrap-token] Using token: bhdcn7.orsewwwtbk1gmdg8
	I1209 11:57:46.946894  661546 out.go:235]   - Configuring RBAC rules ...
	I1209 11:57:46.947041  661546 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 11:57:46.950006  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 11:57:46.956761  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 11:57:46.959756  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 11:57:46.962973  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 11:57:46.970016  661546 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 11:57:47.282251  661546 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 11:57:47.714588  661546 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 11:57:48.283610  661546 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 11:57:48.283671  661546 kubeadm.go:310] 
	I1209 11:57:48.283774  661546 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 11:57:48.283786  661546 kubeadm.go:310] 
	I1209 11:57:48.283901  661546 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 11:57:48.283948  661546 kubeadm.go:310] 
	I1209 11:57:48.283995  661546 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 11:57:48.284089  661546 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 11:57:48.284139  661546 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 11:57:48.284148  661546 kubeadm.go:310] 
	I1209 11:57:48.284216  661546 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 11:57:48.284224  661546 kubeadm.go:310] 
	I1209 11:57:48.284281  661546 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 11:57:48.284291  661546 kubeadm.go:310] 
	I1209 11:57:48.284359  661546 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 11:57:48.284465  661546 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 11:57:48.284583  661546 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 11:57:48.284596  661546 kubeadm.go:310] 
	I1209 11:57:48.284739  661546 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 11:57:48.284846  661546 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 11:57:48.284859  661546 kubeadm.go:310] 
	I1209 11:57:48.284972  661546 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bhdcn7.orsewwwtbk1gmdg8 \
	I1209 11:57:48.285133  661546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 11:57:48.285170  661546 kubeadm.go:310] 	--control-plane 
	I1209 11:57:48.285184  661546 kubeadm.go:310] 
	I1209 11:57:48.285312  661546 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 11:57:48.285321  661546 kubeadm.go:310] 
	I1209 11:57:48.285388  661546 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bhdcn7.orsewwwtbk1gmdg8 \
	I1209 11:57:48.285530  661546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 11:57:48.286117  661546 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:57:48.286246  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:57:48.286263  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:57:48.288141  661546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:57:48.289484  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:57:48.301160  661546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:57:48.320752  661546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:57:48.320831  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:48.320831  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-005123 minikube.k8s.io/updated_at=2024_12_09T11_57_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=embed-certs-005123 minikube.k8s.io/primary=true
	I1209 11:57:48.552069  661546 ops.go:34] apiserver oom_adj: -16
	I1209 11:57:48.552119  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:49.052304  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:49.552516  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:50.052548  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:50.552931  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:51.052381  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:51.552589  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.052273  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.552546  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.645059  661546 kubeadm.go:1113] duration metric: took 4.324296774s to wait for elevateKubeSystemPrivileges
	I1209 11:57:52.645107  661546 kubeadm.go:394] duration metric: took 5m0.847017281s to StartCluster
	I1209 11:57:52.645133  661546 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:52.645241  661546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:57:52.647822  661546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:52.648129  661546 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:57:52.648226  661546 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:57:52.648338  661546 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-005123"
	I1209 11:57:52.648354  661546 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-005123"
	W1209 11:57:52.648366  661546 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:57:52.648367  661546 addons.go:69] Setting default-storageclass=true in profile "embed-certs-005123"
	I1209 11:57:52.648396  661546 config.go:182] Loaded profile config "embed-certs-005123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:57:52.648397  661546 addons.go:69] Setting metrics-server=true in profile "embed-certs-005123"
	I1209 11:57:52.648434  661546 addons.go:234] Setting addon metrics-server=true in "embed-certs-005123"
	I1209 11:57:52.648399  661546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-005123"
	W1209 11:57:52.648448  661546 addons.go:243] addon metrics-server should already be in state true
	I1209 11:57:52.648499  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.648400  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.648867  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648883  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648914  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648932  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.648947  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.648917  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.649702  661546 out.go:177] * Verifying Kubernetes components...
	I1209 11:57:52.651094  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:57:52.665090  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38065
	I1209 11:57:52.665309  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35905
	I1209 11:57:52.665602  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.665889  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.666308  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.666329  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.666470  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.666492  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.666768  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.666907  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.667140  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I1209 11:57:52.667344  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.667387  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.667536  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.667580  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.667652  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.668127  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.668154  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.668657  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.668868  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.672550  661546 addons.go:234] Setting addon default-storageclass=true in "embed-certs-005123"
	W1209 11:57:52.672580  661546 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:57:52.672612  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.672985  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.673032  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.684848  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43667
	I1209 11:57:52.684854  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I1209 11:57:52.685398  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.685451  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.686054  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.686081  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.686155  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.686228  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.686553  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.686614  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.686753  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.686930  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.687838  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33245
	I1209 11:57:52.688391  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.688818  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.689013  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.689040  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.689314  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.689450  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.689908  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.689943  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.691136  661546 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:57:52.691137  661546 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:57:52.692714  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:57:52.692732  661546 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:57:52.692749  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.692789  661546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:52.692800  661546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:57:52.692813  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.696349  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.696791  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.696815  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697088  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697143  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.697314  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.697482  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.697512  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.697547  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697658  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.697787  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.697962  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.698093  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.698209  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.705766  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37265
	I1209 11:57:52.706265  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.706694  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.706721  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.707031  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.707241  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.708747  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.708980  661546 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:52.708997  661546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:57:52.709016  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.711546  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.711986  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.712011  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.712263  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.712438  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.712604  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.712751  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.858535  661546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:57:52.879035  661546 node_ready.go:35] waiting up to 6m0s for node "embed-certs-005123" to be "Ready" ...
	I1209 11:57:52.899550  661546 node_ready.go:49] node "embed-certs-005123" has status "Ready":"True"
	I1209 11:57:52.899575  661546 node_ready.go:38] duration metric: took 20.508179ms for node "embed-certs-005123" to be "Ready" ...
	I1209 11:57:52.899589  661546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:52.960716  661546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:52.962755  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:57:52.962779  661546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:57:52.995747  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:57:52.995787  661546 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:57:53.031395  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:53.031426  661546 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:57:53.031535  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:53.049695  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:53.061716  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:53.314158  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.314212  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.314523  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:53.314548  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.314565  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:53.314586  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.314598  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.314857  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.314875  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:53.323573  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.323590  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.323822  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:53.323873  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.323882  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.004616  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.004655  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.005050  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.005067  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.005075  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.005083  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.005351  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.005372  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.005370  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:54.352527  661546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.290758533s)
	I1209 11:57:54.352616  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.352636  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.352957  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.352977  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.352987  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.352995  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.353278  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.353320  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.353336  661546 addons.go:475] Verifying addon metrics-server=true in "embed-certs-005123"
	I1209 11:57:54.353387  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:54.355153  661546 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1209 11:57:54.356250  661546 addons.go:510] duration metric: took 1.708044398s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1209 11:57:54.968202  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:57.467948  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:57.467979  661546 pod_ready.go:82] duration metric: took 4.507228843s for pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:57.467992  661546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:59.475024  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace has status "Ready":"False"
	I1209 11:58:00.473961  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.473987  661546 pod_ready.go:82] duration metric: took 3.005987981s for pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.473996  661546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.478022  661546 pod_ready.go:93] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.478040  661546 pod_ready.go:82] duration metric: took 4.038353ms for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.478049  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.482415  661546 pod_ready.go:93] pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.482439  661546 pod_ready.go:82] duration metric: took 4.384854ms for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.482449  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.486284  661546 pod_ready.go:93] pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.486311  661546 pod_ready.go:82] duration metric: took 3.85467ms for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.486326  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n4pph" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.490260  661546 pod_ready.go:93] pod "kube-proxy-n4pph" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.490284  661546 pod_ready.go:82] duration metric: took 3.949342ms for pod "kube-proxy-n4pph" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.490296  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.872396  661546 pod_ready.go:93] pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.872420  661546 pod_ready.go:82] duration metric: took 382.116873ms for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.872428  661546 pod_ready.go:39] duration metric: took 7.97282742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:58:00.872446  661546 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:58:00.872502  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:58:00.887281  661546 api_server.go:72] duration metric: took 8.239108757s to wait for apiserver process to appear ...
	I1209 11:58:00.887312  661546 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:58:00.887333  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:58:00.892005  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 200:
	ok
	I1209 11:58:00.893247  661546 api_server.go:141] control plane version: v1.31.2
	I1209 11:58:00.893277  661546 api_server.go:131] duration metric: took 5.95753ms to wait for apiserver health ...
	I1209 11:58:00.893288  661546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:58:01.074723  661546 system_pods.go:59] 9 kube-system pods found
	I1209 11:58:01.074756  661546 system_pods.go:61] "coredns-7c65d6cfc9-t49mk" [ca3ba094-58a2-401d-8aea-46d6d96baacb] Running
	I1209 11:58:01.074762  661546 system_pods.go:61] "coredns-7c65d6cfc9-xspr9" [9384e9ea-987e-4728-bdf2-773645d52ab1] Running
	I1209 11:58:01.074766  661546 system_pods.go:61] "etcd-embed-certs-005123" [f8b23ab4-b852-4598-93d7-3b6eb1543a4b] Running
	I1209 11:58:01.074771  661546 system_pods.go:61] "kube-apiserver-embed-certs-005123" [96b0cb5a-1d81-48fc-bbc9-015de7c48ac5] Running
	I1209 11:58:01.074774  661546 system_pods.go:61] "kube-controller-manager-embed-certs-005123" [33c237d8-8f5a-4d8f-870a-7ba0dc2180dd] Running
	I1209 11:58:01.074777  661546 system_pods.go:61] "kube-proxy-n4pph" [520d101f-0df0-413f-a0fc-22ecc2884d40] Running
	I1209 11:58:01.074780  661546 system_pods.go:61] "kube-scheduler-embed-certs-005123" [11dcaf15-fc3b-4945-af9b-d08aa5528679] Running
	I1209 11:58:01.074786  661546 system_pods.go:61] "metrics-server-6867b74b74-zfw9r" [8438b820-4cc5-4d7b-8af5-9349fdd87ca8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:58:01.074791  661546 system_pods.go:61] "storage-provisioner" [91ceb801-7262-4d7e-9623-c8c1931fc34b] Running
	I1209 11:58:01.074797  661546 system_pods.go:74] duration metric: took 181.502993ms to wait for pod list to return data ...
	I1209 11:58:01.074804  661546 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:58:01.272664  661546 default_sa.go:45] found service account: "default"
	I1209 11:58:01.272697  661546 default_sa.go:55] duration metric: took 197.886347ms for default service account to be created ...
	I1209 11:58:01.272707  661546 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:58:01.475062  661546 system_pods.go:86] 9 kube-system pods found
	I1209 11:58:01.475096  661546 system_pods.go:89] "coredns-7c65d6cfc9-t49mk" [ca3ba094-58a2-401d-8aea-46d6d96baacb] Running
	I1209 11:58:01.475102  661546 system_pods.go:89] "coredns-7c65d6cfc9-xspr9" [9384e9ea-987e-4728-bdf2-773645d52ab1] Running
	I1209 11:58:01.475105  661546 system_pods.go:89] "etcd-embed-certs-005123" [f8b23ab4-b852-4598-93d7-3b6eb1543a4b] Running
	I1209 11:58:01.475109  661546 system_pods.go:89] "kube-apiserver-embed-certs-005123" [96b0cb5a-1d81-48fc-bbc9-015de7c48ac5] Running
	I1209 11:58:01.475114  661546 system_pods.go:89] "kube-controller-manager-embed-certs-005123" [33c237d8-8f5a-4d8f-870a-7ba0dc2180dd] Running
	I1209 11:58:01.475118  661546 system_pods.go:89] "kube-proxy-n4pph" [520d101f-0df0-413f-a0fc-22ecc2884d40] Running
	I1209 11:58:01.475121  661546 system_pods.go:89] "kube-scheduler-embed-certs-005123" [11dcaf15-fc3b-4945-af9b-d08aa5528679] Running
	I1209 11:58:01.475131  661546 system_pods.go:89] "metrics-server-6867b74b74-zfw9r" [8438b820-4cc5-4d7b-8af5-9349fdd87ca8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:58:01.475138  661546 system_pods.go:89] "storage-provisioner" [91ceb801-7262-4d7e-9623-c8c1931fc34b] Running
	I1209 11:58:01.475148  661546 system_pods.go:126] duration metric: took 202.434687ms to wait for k8s-apps to be running ...
	I1209 11:58:01.475158  661546 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:58:01.475220  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:58:01.490373  661546 system_svc.go:56] duration metric: took 15.20079ms WaitForService to wait for kubelet
	I1209 11:58:01.490416  661546 kubeadm.go:582] duration metric: took 8.842250416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:58:01.490451  661546 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:58:01.673621  661546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:58:01.673651  661546 node_conditions.go:123] node cpu capacity is 2
	I1209 11:58:01.673662  661546 node_conditions.go:105] duration metric: took 183.205852ms to run NodePressure ...
	I1209 11:58:01.673674  661546 start.go:241] waiting for startup goroutines ...
	I1209 11:58:01.673681  661546 start.go:246] waiting for cluster config update ...
	I1209 11:58:01.673691  661546 start.go:255] writing updated cluster config ...
	I1209 11:58:01.673995  661546 ssh_runner.go:195] Run: rm -f paused
	I1209 11:58:01.725363  661546 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:58:01.727275  661546 out.go:177] * Done! kubectl is now configured to use "embed-certs-005123" cluster and "default" namespace by default
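The readiness checks logged above can be re-run by hand against the same cluster to confirm the state minikube reports; a minimal sketch, assuming the kubeconfig context "embed-certs-005123" and the apiserver endpoint shown in the log (-k skips TLS verification and is only for a quick manual probe):

	# Probe the same healthz endpoint minikube polled above
	curl -k https://192.168.72.218:8443/healthz

	# List kube-system pods to compare against the set reported as Running
	kubectl --context embed-certs-005123 -n kube-system get pods -o wide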
	I1209 11:58:14.994765  662586 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 11:58:14.994918  662586 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 11:58:14.995050  662586 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:58:14.995118  662586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:58:14.995182  662586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:58:14.995272  662586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:58:14.995353  662586 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:58:14.995410  662586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:58:14.996905  662586 out.go:235]   - Generating certificates and keys ...
	I1209 11:58:14.997000  662586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:58:14.997055  662586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:58:14.997123  662586 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:58:14.997184  662586 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:58:14.997278  662586 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:58:14.997349  662586 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:58:14.997474  662586 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:58:14.997567  662586 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:58:14.997631  662586 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:58:14.997700  662586 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:58:14.997736  662586 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:58:14.997783  662586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:58:14.997826  662586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:58:14.997871  662586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:58:14.997930  662586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:58:14.997977  662586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:58:14.998063  662586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:58:14.998141  662586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:58:14.998199  662586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:58:14.998264  662586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:58:14.999539  662586 out.go:235]   - Booting up control plane ...
	I1209 11:58:14.999663  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:58:14.999748  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:58:14.999824  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:58:14.999946  662586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:58:15.000148  662586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:58:15.000221  662586 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:58:15.000326  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000532  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.000598  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000753  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.000814  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000971  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001064  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.001273  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001335  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.001486  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001493  662586 kubeadm.go:310] 
	I1209 11:58:15.001553  662586 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 11:58:15.001616  662586 kubeadm.go:310] 		timed out waiting for the condition
	I1209 11:58:15.001631  662586 kubeadm.go:310] 
	I1209 11:58:15.001685  662586 kubeadm.go:310] 	This error is likely caused by:
	I1209 11:58:15.001732  662586 kubeadm.go:310] 		- The kubelet is not running
	I1209 11:58:15.001883  662586 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 11:58:15.001897  662586 kubeadm.go:310] 
	I1209 11:58:15.002041  662586 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 11:58:15.002087  662586 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 11:58:15.002146  662586 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 11:58:15.002156  662586 kubeadm.go:310] 
	I1209 11:58:15.002294  662586 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 11:58:15.002373  662586 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 11:58:15.002380  662586 kubeadm.go:310] 
	I1209 11:58:15.002502  662586 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 11:58:15.002623  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 11:58:15.002725  662586 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 11:58:15.002799  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 11:58:15.002835  662586 kubeadm.go:310] 
	W1209 11:58:15.002956  662586 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1209 11:58:15.003022  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:58:15.469838  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:58:15.484503  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:58:15.493409  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:58:15.493430  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:58:15.493487  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:58:15.502508  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:58:15.502568  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:58:15.511743  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:58:15.519855  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:58:15.519913  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:58:15.528743  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:58:15.537000  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:58:15.537072  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:58:15.546520  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:58:15.555448  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:58:15.555526  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
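The cleanup sequence above (kubeadm reset, then per-file checks and removal of stale kubeconfigs) condenses into a short manual equivalent; this is a sketch using the same paths and binary location the log shows, not a command this run executed:

	# Reset any partial control-plane state (same invocation as the log)
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm reset --cri-socket /var/run/crio/crio.sock --force

	# Drop kubeconfigs that do not point at the expected control-plane endpoint
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done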
	I1209 11:58:15.565618  662586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:58:15.631763  662586 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:58:15.631832  662586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:58:15.798683  662586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:58:15.798822  662586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:58:15.798957  662586 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:58:15.974522  662586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:58:15.976286  662586 out.go:235]   - Generating certificates and keys ...
	I1209 11:58:15.976408  662586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:58:15.976492  662586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:58:15.976616  662586 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:58:15.976714  662586 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:58:15.976813  662586 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:58:15.976889  662586 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:58:15.976978  662586 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:58:15.977064  662586 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:58:15.977184  662586 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:58:15.977251  662586 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:58:15.977287  662586 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:58:15.977363  662586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:58:16.193383  662586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:58:16.324912  662586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:58:16.541372  662586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:58:16.786389  662586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:58:16.807241  662586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:58:16.808750  662586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:58:16.808823  662586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:58:16.951756  662586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:58:16.954338  662586 out.go:235]   - Booting up control plane ...
	I1209 11:58:16.954486  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:58:16.968892  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:58:16.970556  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:58:16.971301  662586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:58:16.974040  662586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:58:56.976537  662586 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:58:56.976966  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:56.977214  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:01.977861  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:01.978074  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:11.978821  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:11.979056  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:31.980118  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:31.980386  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 12:00:11.981507  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 12:00:11.981791  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 12:00:11.981804  662586 kubeadm.go:310] 
	I1209 12:00:11.981863  662586 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 12:00:11.981916  662586 kubeadm.go:310] 		timed out waiting for the condition
	I1209 12:00:11.981926  662586 kubeadm.go:310] 
	I1209 12:00:11.981977  662586 kubeadm.go:310] 	This error is likely caused by:
	I1209 12:00:11.982028  662586 kubeadm.go:310] 		- The kubelet is not running
	I1209 12:00:11.982232  662586 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 12:00:11.982262  662586 kubeadm.go:310] 
	I1209 12:00:11.982449  662586 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 12:00:11.982506  662586 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 12:00:11.982555  662586 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 12:00:11.982564  662586 kubeadm.go:310] 
	I1209 12:00:11.982709  662586 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 12:00:11.982824  662586 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 12:00:11.982837  662586 kubeadm.go:310] 
	I1209 12:00:11.982975  662586 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 12:00:11.983092  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 12:00:11.983186  662586 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 12:00:11.983259  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 12:00:11.983308  662586 kubeadm.go:310] 
	I1209 12:00:11.983442  662586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 12:00:11.983534  662586 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 12:00:11.983622  662586 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 12:00:11.983692  662586 kubeadm.go:394] duration metric: took 7m57.372617524s to StartCluster
	I1209 12:00:11.983778  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 12:00:11.983852  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 12:00:12.032068  662586 cri.go:89] found id: ""
	I1209 12:00:12.032110  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.032126  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 12:00:12.032139  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 12:00:12.032232  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 12:00:12.074929  662586 cri.go:89] found id: ""
	I1209 12:00:12.074977  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.074990  662586 logs.go:284] No container was found matching "etcd"
	I1209 12:00:12.075001  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 12:00:12.075074  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 12:00:12.113547  662586 cri.go:89] found id: ""
	I1209 12:00:12.113582  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.113592  662586 logs.go:284] No container was found matching "coredns"
	I1209 12:00:12.113598  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 12:00:12.113661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 12:00:12.147436  662586 cri.go:89] found id: ""
	I1209 12:00:12.147465  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.147475  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 12:00:12.147481  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 12:00:12.147535  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 12:00:12.184398  662586 cri.go:89] found id: ""
	I1209 12:00:12.184439  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.184453  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 12:00:12.184463  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 12:00:12.184541  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 12:00:12.230844  662586 cri.go:89] found id: ""
	I1209 12:00:12.230884  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.230896  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 12:00:12.230905  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 12:00:12.230981  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 12:00:12.264897  662586 cri.go:89] found id: ""
	I1209 12:00:12.264930  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.264939  662586 logs.go:284] No container was found matching "kindnet"
	I1209 12:00:12.264946  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 12:00:12.265001  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 12:00:12.303553  662586 cri.go:89] found id: ""
	I1209 12:00:12.303594  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.303607  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 12:00:12.303622  662586 logs.go:123] Gathering logs for container status ...
	I1209 12:00:12.303638  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 12:00:12.342799  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 12:00:12.342838  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 12:00:12.392992  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 12:00:12.393039  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 12:00:12.407065  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 12:00:12.407100  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 12:00:12.483599  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 12:00:12.483651  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 12:00:12.483675  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1209 12:00:12.591518  662586 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1209 12:00:12.591615  662586 out.go:270] * 
	W1209 12:00:12.591715  662586 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 12:00:12.591737  662586 out.go:270] * 
	W1209 12:00:12.592644  662586 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 12:00:12.596340  662586 out.go:201] 
	W1209 12:00:12.597706  662586 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 12:00:12.597757  662586 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 12:00:12.597798  662586 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 12:00:12.599219  662586 out.go:201] 
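The exit message above points at two follow-ups: inspecting the kubelet journal and retrying with the systemd cgroup driver. A minimal sketch of that remediation, assuming the profile name "old-k8s-version-014592" (taken from the CRI-O log below) and the KVM/cri-o flags implied by this job's name; the exact flag set used by the test harness is not shown in this run:

	# On the node (e.g. via `minikube ssh -p old-k8s-version-014592`), inspect why the kubelet keeps failing
	journalctl -xeu kubelet -n 200

	# Retry the start with the cgroup driver the suggestion names
	minikube start -p old-k8s-version-014592 \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd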
	
	
	==> CRI-O <==
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.883295526Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746157883266569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf723c1b-cf08-491b-a3da-1a28cae3b4ff name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.884059208Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed4e9d97-07fb-4ad3-bb0e-c3c1aebc2462 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.884111219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed4e9d97-07fb-4ad3-bb0e-c3c1aebc2462 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.884151458Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ed4e9d97-07fb-4ad3-bb0e-c3c1aebc2462 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.913529633Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d616da46-eb9e-49b2-a02d-abfec055cdd5 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.913641305Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d616da46-eb9e-49b2-a02d-abfec055cdd5 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.914795298Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cbef3ad3-1694-439c-b59c-ab92994b4ca3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.915243863Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746157915212954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cbef3ad3-1694-439c-b59c-ab92994b4ca3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.915821244Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=207983ef-8340-4213-8149-0e67a50ce9a4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.915894339Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=207983ef-8340-4213-8149-0e67a50ce9a4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.915928022Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=207983ef-8340-4213-8149-0e67a50ce9a4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.949697243Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50d327f4-eeb7-4fd6-b46c-6985f85d75c5 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.949795158Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50d327f4-eeb7-4fd6-b46c-6985f85d75c5 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.950755465Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4fc770ba-3734-4ba1-9b14-a915242fb310 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.951214311Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746157951193348,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4fc770ba-3734-4ba1-9b14-a915242fb310 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.951717597Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a090960b-da2a-4acc-9c35-ab5f0bb09b39 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.951766164Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a090960b-da2a-4acc-9c35-ab5f0bb09b39 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.951812863Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a090960b-da2a-4acc-9c35-ab5f0bb09b39 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.982859575Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d061d33b-c105-4e66-af48-6bc0fc82391d name=/runtime.v1.RuntimeService/Version
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.982932270Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d061d33b-c105-4e66-af48-6bc0fc82391d name=/runtime.v1.RuntimeService/Version
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.984066780Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8d7c422-6f84-48e4-bec9-39dcc96e8881 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.984450007Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746157984425711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8d7c422-6f84-48e4-bec9-39dcc96e8881 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.985023560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce7542be-8660-424c-a356-ce4b06d032b1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.985089745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce7542be-8660-424c-a356-ce4b06d032b1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:09:17 old-k8s-version-014592 crio[629]: time="2024-12-09 12:09:17.985140113Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ce7542be-8660-424c-a356-ce4b06d032b1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
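	# Hedged sketch (not part of the captured output): the connection refused on localhost:8443
	# means the kube-apiserver never came up. From inside the node
	# ('minikube ssh -p old-k8s-version-014592') one could probe the port and look for an
	# apiserver container; the CRI-O socket path is the one used elsewhere in this log.
	curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on 8443"
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube-apiserver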
	
	
	==> dmesg <==
	[Dec 9 11:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053266] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039222] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.927032] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.003479] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.562691] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 9 11:52] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.070928] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073924] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.215176] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.123356] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.253740] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +6.933985] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.063858] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.761344] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +9.884362] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 9 11:56] systemd-fstab-generator[5066]: Ignoring "noauto" option for root device
	[Dec 9 11:58] systemd-fstab-generator[5348]: Ignoring "noauto" option for root device
	[  +0.064846] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:09:18 up 17 min,  0 users,  load average: 0.16, 0.06, 0.07
	Linux old-k8s-version-014592 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]: net.(*sysDialer).doDialTCP(0xc000568b80, 0x4f7fe40, 0xc000c44540, 0x0, 0xc000c26c90, 0x3fddce0, 0x70f9210, 0x0)
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]:         /usr/local/go/src/net/tcpsock_posix.go:65 +0xc5
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]: net.(*sysDialer).dialTCP(0xc000568b80, 0x4f7fe40, 0xc000c44540, 0x0, 0xc000c26c90, 0x57b620, 0x48ab5d6, 0x7f50b61ad288)
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]:         /usr/local/go/src/net/tcpsock_posix.go:61 +0xd7
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]: net.(*sysDialer).dialSingle(0xc000568b80, 0x4f7fe40, 0xc000c44540, 0x4f1ff00, 0xc000c26c90, 0x0, 0x0, 0x0, 0x0)
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]: net.(*sysDialer).dialSerial(0xc000568b80, 0x4f7fe40, 0xc000c44540, 0xc000b919d0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]:         /usr/local/go/src/net/dial.go:548 +0x152
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]: net.(*Dialer).DialContext(0xc0001aa540, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000bd97a0, 0x24, 0x0, 0x0, 0x0, ...)
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc00098e640, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000bd97a0, 0x24, 0x60, 0x7f50b61ad178, 0x118, ...)
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]: net/http.(*Transport).dial(0xc000a8adc0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000bd97a0, 0x24, 0x0, 0x0, 0x0, ...)
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]: net/http.(*Transport).dialConn(0xc000a8adc0, 0x4f7fe00, 0xc000052030, 0x0, 0xc0004ae540, 0x5, 0xc000bd97a0, 0x24, 0x0, 0xc000b95200, ...)
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]: net/http.(*Transport).dialConnFor(0xc000a8adc0, 0xc000b97550)
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]: created by net/http.(*Transport).queueForDial
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]: E1209 12:09:18.137046    6535 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dold-k8s-version-014592&limit=500&resourceVersion=0": dial tcp 192.168.61.132:8443: connect: connection refused
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]: E1209 12:09:18.137224    6535 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.132:8443: connect: connection refused
	Dec 09 12:09:18 old-k8s-version-014592 kubelet[6535]: E1209 12:09:18.137344    6535 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dold-k8s-version-014592&limit=500&resourceVersion=0": dial tcp 192.168.61.132:8443: connect: connection refused
	Dec 09 12:09:18 old-k8s-version-014592 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 09 12:09:18 old-k8s-version-014592 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-014592 -n old-k8s-version-014592
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-014592 -n old-k8s-version-014592: exit status 2 (235.865229ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-014592" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.37s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (447.38s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-820741 -n no-preload-820741
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-09 12:13:05.958196621 +0000 UTC m=+5974.418922145
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-820741 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-820741 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.225µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-820741 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
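A minimal follow-up sketch (not part of the captured run; the context, namespace, deployment name and label all come from the failing test output above) for inspecting what the dashboard addon actually deployed:

	kubectl --context no-preload-820741 -n kubernetes-dashboard get deploy -o wide
	kubectl --context no-preload-820741 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
	kubectl --context no-preload-820741 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard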
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-820741 -n no-preload-820741
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-820741 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-820741 logs -n 25: (1.348271935s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-820741             | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC | 09 Dec 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:46 UTC |
	| start   | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:47 UTC |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-005123                 | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-005123                                  | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-014592        | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-820741                  | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC | 09 Dec 24 11:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-482476  | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC | 09 Dec 24 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC |                     |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC | 09 Dec 24 11:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-014592             | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC | 09 Dec 24 11:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-482476       | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:49 UTC | 09 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 12:12 UTC | 09 Dec 24 12:12 UTC |
	| start   | -p newest-cni-932878 --memory=2200 --alsologtostderr   | newest-cni-932878            | jenkins | v1.34.0 | 09 Dec 24 12:12 UTC | 09 Dec 24 12:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-005123                                  | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 12:13 UTC | 09 Dec 24 12:13 UTC |
	| start   | -p auto-763643 --memory=3072                           | auto-763643                  | jenkins | v1.34.0 | 09 Dec 24 12:13 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-932878             | newest-cni-932878            | jenkins | v1.34.0 | 09 Dec 24 12:13 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 12:13:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 12:13:05.183835  669561 out.go:345] Setting OutFile to fd 1 ...
	I1209 12:13:05.184000  669561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 12:13:05.184013  669561 out.go:358] Setting ErrFile to fd 2...
	I1209 12:13:05.184020  669561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 12:13:05.184284  669561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 12:13:05.185125  669561 out.go:352] Setting JSON to false
	I1209 12:13:05.186509  669561 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":17729,"bootTime":1733728656,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 12:13:05.186632  669561 start.go:139] virtualization: kvm guest
	I1209 12:13:05.188886  669561 out.go:177] * [auto-763643] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 12:13:05.190542  669561 notify.go:220] Checking for updates...
	I1209 12:13:05.190556  669561 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 12:13:05.192353  669561 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 12:13:05.193652  669561 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 12:13:05.195262  669561 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 12:13:05.196622  669561 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 12:13:05.198330  669561 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 12:13:05.200625  669561 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 12:13:05.200807  669561 config.go:182] Loaded profile config "newest-cni-932878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 12:13:05.200960  669561 config.go:182] Loaded profile config "no-preload-820741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 12:13:05.201102  669561 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 12:13:05.243513  669561 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 12:13:05.245079  669561 start.go:297] selected driver: kvm2
	I1209 12:13:05.245105  669561 start.go:901] validating driver "kvm2" against <nil>
	I1209 12:13:05.245125  669561 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 12:13:05.246444  669561 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 12:13:05.246640  669561 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 12:13:05.264641  669561 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 12:13:05.264694  669561 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 12:13:05.265052  669561 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 12:13:05.265095  669561 cni.go:84] Creating CNI manager for ""
	I1209 12:13:05.265158  669561 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 12:13:05.265173  669561 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 12:13:05.265247  669561 start.go:340] cluster config:
	{Name:auto-763643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-763643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 12:13:05.265387  669561 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 12:13:05.267240  669561 out.go:177] * Starting "auto-763643" primary control-plane node in "auto-763643" cluster
	I1209 12:13:05.268669  669561 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 12:13:05.268721  669561 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 12:13:05.268734  669561 cache.go:56] Caching tarball of preloaded images
	I1209 12:13:05.268853  669561 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 12:13:05.268870  669561 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 12:13:05.268992  669561 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/auto-763643/config.json ...
	I1209 12:13:05.269021  669561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/auto-763643/config.json: {Name:mk233091455ab6a974f1ebea1ad0bcce713291ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:13:05.269189  669561 start.go:360] acquireMachinesLock for auto-763643: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 12:13:05.269232  669561 start.go:364] duration metric: took 25.88µs to acquireMachinesLock for "auto-763643"
	I1209 12:13:05.269258  669561 start.go:93] Provisioning new machine with config: &{Name:auto-763643 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.31.2 ClusterName:auto-763643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 12:13:05.269352  669561 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 12:13:04.433188  668975 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1209 12:13:04.433294  668975 main.go:141] libmachine: Making call to close driver server
	I1209 12:13:04.433311  668975 main.go:141] libmachine: (newest-cni-932878) Calling .Close
	I1209 12:13:04.433715  668975 main.go:141] libmachine: Successfully made call to close driver server
	I1209 12:13:04.433736  668975 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 12:13:04.433756  668975 main.go:141] libmachine: Making call to close driver server
	I1209 12:13:04.433767  668975 main.go:141] libmachine: (newest-cni-932878) Calling .Close
	I1209 12:13:04.434328  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Closing plugin on server side
	I1209 12:13:04.434370  668975 main.go:141] libmachine: Successfully made call to close driver server
	I1209 12:13:04.434378  668975 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 12:13:04.435120  668975 api_server.go:52] waiting for apiserver process to appear ...
	I1209 12:13:04.435190  668975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 12:13:04.476950  668975 main.go:141] libmachine: Making call to close driver server
	I1209 12:13:04.476982  668975 main.go:141] libmachine: (newest-cni-932878) Calling .Close
	I1209 12:13:04.477434  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Closing plugin on server side
	I1209 12:13:04.477507  668975 main.go:141] libmachine: Successfully made call to close driver server
	I1209 12:13:04.477522  668975 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 12:13:04.942200  668975 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-932878" context rescaled to 1 replicas
	I1209 12:13:05.307167  668975 api_server.go:72] duration metric: took 1.748334558s to wait for apiserver process to appear ...
	I1209 12:13:05.307196  668975 api_server.go:88] waiting for apiserver healthz status ...
	I1209 12:13:05.307225  668975 api_server.go:253] Checking apiserver healthz at https://192.168.61.104:8443/healthz ...
	I1209 12:13:05.307684  668975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.218165952s)
	I1209 12:13:05.307724  668975 main.go:141] libmachine: Making call to close driver server
	I1209 12:13:05.307737  668975 main.go:141] libmachine: (newest-cni-932878) Calling .Close
	I1209 12:13:05.308239  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Closing plugin on server side
	I1209 12:13:05.308281  668975 main.go:141] libmachine: Successfully made call to close driver server
	I1209 12:13:05.308288  668975 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 12:13:05.308297  668975 main.go:141] libmachine: Making call to close driver server
	I1209 12:13:05.308304  668975 main.go:141] libmachine: (newest-cni-932878) Calling .Close
	I1209 12:13:05.308609  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Closing plugin on server side
	I1209 12:13:05.308639  668975 main.go:141] libmachine: Successfully made call to close driver server
	I1209 12:13:05.308656  668975 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 12:13:05.310742  668975 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1209 12:13:05.312601  668975 addons.go:510] duration metric: took 1.753753392s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1209 12:13:05.329483  668975 api_server.go:279] https://192.168.61.104:8443/healthz returned 200:
	ok
	I1209 12:13:05.333522  668975 api_server.go:141] control plane version: v1.31.2
	I1209 12:13:05.333558  668975 api_server.go:131] duration metric: took 26.351824ms to wait for apiserver health ...
	I1209 12:13:05.333569  668975 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 12:13:05.355740  668975 system_pods.go:59] 8 kube-system pods found
	I1209 12:13:05.355792  668975 system_pods.go:61] "coredns-7c65d6cfc9-9frbh" [cebbfe2d-d640-4969-85e1-31f52fc127af] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 12:13:05.355814  668975 system_pods.go:61] "coredns-7c65d6cfc9-r8fpf" [b09d11e9-2a5e-46bd-bee6-a682ec001863] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 12:13:05.355831  668975 system_pods.go:61] "etcd-newest-cni-932878" [30801911-5e56-4cbf-9a1c-5a26ade667f6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 12:13:05.355847  668975 system_pods.go:61] "kube-apiserver-newest-cni-932878" [315ed3e2-70d9-48f7-9dc8-5f5a2a1538f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 12:13:05.355865  668975 system_pods.go:61] "kube-controller-manager-newest-cni-932878" [4737a656-a45a-4fc1-be29-8e883b40754d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 12:13:05.355879  668975 system_pods.go:61] "kube-proxy-m7k5t" [0e94c309-7e0e-40cd-bff8-9442861648f6] Running
	I1209 12:13:05.355886  668975 system_pods.go:61] "kube-scheduler-newest-cni-932878" [1b80b249-29b3-4dc3-854e-c63c76e352d1] Running
	I1209 12:13:05.355904  668975 system_pods.go:61] "storage-provisioner" [a7b8606c-5815-4f0d-9400-541154968b8b] Pending
	I1209 12:13:05.355917  668975 system_pods.go:74] duration metric: took 22.337992ms to wait for pod list to return data ...
	I1209 12:13:05.355931  668975 default_sa.go:34] waiting for default service account to be created ...
	I1209 12:13:05.364721  668975 default_sa.go:45] found service account: "default"
	I1209 12:13:05.364758  668975 default_sa.go:55] duration metric: took 8.818235ms for default service account to be created ...
	I1209 12:13:05.364775  668975 kubeadm.go:582] duration metric: took 1.805946797s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 12:13:05.364800  668975 node_conditions.go:102] verifying NodePressure condition ...
	I1209 12:13:05.373428  668975 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 12:13:05.373471  668975 node_conditions.go:123] node cpu capacity is 2
	I1209 12:13:05.373501  668975 node_conditions.go:105] duration metric: took 8.688878ms to run NodePressure ...
	I1209 12:13:05.373518  668975 start.go:241] waiting for startup goroutines ...
	I1209 12:13:05.373530  668975 start.go:246] waiting for cluster config update ...
	I1209 12:13:05.373553  668975 start.go:255] writing updated cluster config ...
	I1209 12:13:05.373889  668975 ssh_runner.go:195] Run: rm -f paused
	I1209 12:13:05.460650  668975 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 12:13:05.462666  668975 out.go:177] * Done! kubectl is now configured to use "newest-cni-932878" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.648034692Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746386648000370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f19dde59-eaf0-4d28-9e97-b96ccca1ce51 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.648548998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=840c7b7c-56cc-4a4e-9c76-d8538ac03d33 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.648622876Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=840c7b7c-56cc-4a4e-9c76-d8538ac03d33 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.649205330Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cd2924576549a280dcba998d853546db6a30837efcbf285175564babcbff919,PodSandboxId:b98ca63b5a1555dc050a61075fce6bc10f4f1a77958ce4d0b60df2933510611c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733745146632686621,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4e76af62-1ba8-410c-ace3-c92e48840825,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42,PodSandboxId:9cc643ec88d327b685ab6fa714ccf96a1c9b2cc90138ceaa78baa070fed18a92,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745142990608503,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z647g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e15e13e-efe6-4ae2-8bac-205aadf8f95a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb,PodSandboxId:fc6d68de344af38241a746310493add2f1c11df00bcbb98f413d294108bf2a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745142977116901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: aeba46d3-ecf1-4923-b89c-75b34e75a06d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f,PodSandboxId:fc6d68de344af38241a746310493add2f1c11df00bcbb98f413d294108bf2a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733745128033456849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
eba46d3-ecf1-4923-b89c-75b34e75a06d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2,PodSandboxId:caa8831be0bd8e39cb1d1990ba51ad6c70c99c9d531e9420f22596be2f01b978,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733745127414709401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hpvvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0945206c-8d1e-47e0-b35b-9011073423
b2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16,PodSandboxId:910053c757fafdf5b1c3ff2c244f3d09d3ff14ad898cdf63561e1845d9373e02,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733745123566013883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286a8335482d6443f935ef423fb83f8c,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413,PodSandboxId:5218b6309474d233bd08077d66abd5c967dd3f75b3b28ec1a3f9c5a30ea04ed1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733745123587161547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feed5b01992a8257b2679a0cdc55f40b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d,PodSandboxId:bc507605abc8700e8e949c93148b9faf0f46443616e103e6042634e7ad45bc52,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733745123542798054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5afc6fc69dc6125a529552eeff4d23ad,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb,PodSandboxId:119dbeb98f771e4092d9710b08a04c92705c549afb512e90f252736f96c6c997,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745123548954480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ebc694b948cf176fee9c9bd3684e24c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=840c7b7c-56cc-4a4e-9c76-d8538ac03d33 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.700491587Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38c51588-4a34-431a-9405-0c32622cdb3d name=/runtime.v1.RuntimeService/Version
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.700565577Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38c51588-4a34-431a-9405-0c32622cdb3d name=/runtime.v1.RuntimeService/Version
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.702067532Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d422aca-a891-4d5d-b70d-adf4b2948240 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.702933445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746386702893658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d422aca-a891-4d5d-b70d-adf4b2948240 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.703655493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8207c13d-9bc0-4a12-8520-33ae71dfe113 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.703735969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8207c13d-9bc0-4a12-8520-33ae71dfe113 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.704256922Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cd2924576549a280dcba998d853546db6a30837efcbf285175564babcbff919,PodSandboxId:b98ca63b5a1555dc050a61075fce6bc10f4f1a77958ce4d0b60df2933510611c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733745146632686621,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4e76af62-1ba8-410c-ace3-c92e48840825,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42,PodSandboxId:9cc643ec88d327b685ab6fa714ccf96a1c9b2cc90138ceaa78baa070fed18a92,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745142990608503,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z647g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e15e13e-efe6-4ae2-8bac-205aadf8f95a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb,PodSandboxId:fc6d68de344af38241a746310493add2f1c11df00bcbb98f413d294108bf2a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745142977116901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: aeba46d3-ecf1-4923-b89c-75b34e75a06d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f,PodSandboxId:fc6d68de344af38241a746310493add2f1c11df00bcbb98f413d294108bf2a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733745128033456849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
eba46d3-ecf1-4923-b89c-75b34e75a06d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2,PodSandboxId:caa8831be0bd8e39cb1d1990ba51ad6c70c99c9d531e9420f22596be2f01b978,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733745127414709401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hpvvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0945206c-8d1e-47e0-b35b-9011073423
b2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16,PodSandboxId:910053c757fafdf5b1c3ff2c244f3d09d3ff14ad898cdf63561e1845d9373e02,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733745123566013883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286a8335482d6443f935ef423fb83f8c,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413,PodSandboxId:5218b6309474d233bd08077d66abd5c967dd3f75b3b28ec1a3f9c5a30ea04ed1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733745123587161547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feed5b01992a8257b2679a0cdc55f40b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d,PodSandboxId:bc507605abc8700e8e949c93148b9faf0f46443616e103e6042634e7ad45bc52,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733745123542798054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5afc6fc69dc6125a529552eeff4d23ad,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb,PodSandboxId:119dbeb98f771e4092d9710b08a04c92705c549afb512e90f252736f96c6c997,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745123548954480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ebc694b948cf176fee9c9bd3684e24c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8207c13d-9bc0-4a12-8520-33ae71dfe113 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.757767839Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=881af176-116e-4b64-b0f5-d60cabdcde67 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.757955102Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=881af176-116e-4b64-b0f5-d60cabdcde67 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.759693199Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51963cc3-cd37-463b-a937-c0336e0ec9cd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.760240101Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746386760208231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51963cc3-cd37-463b-a937-c0336e0ec9cd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.760899443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07ee2d81-a624-4991-9845-fa163114d29f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.760979085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07ee2d81-a624-4991-9845-fa163114d29f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.761693116Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cd2924576549a280dcba998d853546db6a30837efcbf285175564babcbff919,PodSandboxId:b98ca63b5a1555dc050a61075fce6bc10f4f1a77958ce4d0b60df2933510611c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733745146632686621,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4e76af62-1ba8-410c-ace3-c92e48840825,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42,PodSandboxId:9cc643ec88d327b685ab6fa714ccf96a1c9b2cc90138ceaa78baa070fed18a92,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745142990608503,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z647g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e15e13e-efe6-4ae2-8bac-205aadf8f95a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb,PodSandboxId:fc6d68de344af38241a746310493add2f1c11df00bcbb98f413d294108bf2a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745142977116901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: aeba46d3-ecf1-4923-b89c-75b34e75a06d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f,PodSandboxId:fc6d68de344af38241a746310493add2f1c11df00bcbb98f413d294108bf2a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733745128033456849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
eba46d3-ecf1-4923-b89c-75b34e75a06d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2,PodSandboxId:caa8831be0bd8e39cb1d1990ba51ad6c70c99c9d531e9420f22596be2f01b978,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733745127414709401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hpvvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0945206c-8d1e-47e0-b35b-9011073423
b2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16,PodSandboxId:910053c757fafdf5b1c3ff2c244f3d09d3ff14ad898cdf63561e1845d9373e02,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733745123566013883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286a8335482d6443f935ef423fb83f8c,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413,PodSandboxId:5218b6309474d233bd08077d66abd5c967dd3f75b3b28ec1a3f9c5a30ea04ed1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733745123587161547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feed5b01992a8257b2679a0cdc55f40b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d,PodSandboxId:bc507605abc8700e8e949c93148b9faf0f46443616e103e6042634e7ad45bc52,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733745123542798054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5afc6fc69dc6125a529552eeff4d23ad,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb,PodSandboxId:119dbeb98f771e4092d9710b08a04c92705c549afb512e90f252736f96c6c997,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745123548954480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ebc694b948cf176fee9c9bd3684e24c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07ee2d81-a624-4991-9845-fa163114d29f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.814193603Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b73d88e2-844a-4e2e-aed2-9ce9205b3f61 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.814305385Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b73d88e2-844a-4e2e-aed2-9ce9205b3f61 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.816889345Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a1cd826-10f1-41f1-9b37-fe2866e27659 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.817378460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746386817345666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a1cd826-10f1-41f1-9b37-fe2866e27659 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.818288911Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8e0e6a6-c626-4899-be51-86bddb987521 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.818371568Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8e0e6a6-c626-4899-be51-86bddb987521 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:06 no-preload-820741 crio[714]: time="2024-12-09 12:13:06.818635403Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cd2924576549a280dcba998d853546db6a30837efcbf285175564babcbff919,PodSandboxId:b98ca63b5a1555dc050a61075fce6bc10f4f1a77958ce4d0b60df2933510611c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733745146632686621,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4e76af62-1ba8-410c-ace3-c92e48840825,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42,PodSandboxId:9cc643ec88d327b685ab6fa714ccf96a1c9b2cc90138ceaa78baa070fed18a92,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745142990608503,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-z647g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e15e13e-efe6-4ae2-8bac-205aadf8f95a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb,PodSandboxId:fc6d68de344af38241a746310493add2f1c11df00bcbb98f413d294108bf2a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745142977116901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: aeba46d3-ecf1-4923-b89c-75b34e75a06d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f,PodSandboxId:fc6d68de344af38241a746310493add2f1c11df00bcbb98f413d294108bf2a52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733745128033456849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
eba46d3-ecf1-4923-b89c-75b34e75a06d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2,PodSandboxId:caa8831be0bd8e39cb1d1990ba51ad6c70c99c9d531e9420f22596be2f01b978,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733745127414709401,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hpvvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0945206c-8d1e-47e0-b35b-9011073423
b2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16,PodSandboxId:910053c757fafdf5b1c3ff2c244f3d09d3ff14ad898cdf63561e1845d9373e02,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733745123566013883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286a8335482d6443f935ef423fb83f8c,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413,PodSandboxId:5218b6309474d233bd08077d66abd5c967dd3f75b3b28ec1a3f9c5a30ea04ed1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733745123587161547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feed5b01992a8257b2679a0cdc55f40b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d,PodSandboxId:bc507605abc8700e8e949c93148b9faf0f46443616e103e6042634e7ad45bc52,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733745123542798054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5afc6fc69dc6125a529552eeff4d23ad,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb,PodSandboxId:119dbeb98f771e4092d9710b08a04c92705c549afb512e90f252736f96c6c997,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745123548954480,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-820741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ebc694b948cf176fee9c9bd3684e24c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8e0e6a6-c626-4899-be51-86bddb987521 name=/runtime.v1.RuntimeService/ListContainers
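
	The repeated Version / ImageFsInfo / ListContainers request-response pairs above appear to be the kubelet's periodic CRI polling of cri-o; the returned container list is identical across the polls within this second. As a minimal sketch (assuming crictl is available on the node; the socket path is the one reported in the node's cri-socket annotation further down), the same three queries can be issued by hand:

	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a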
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1cd2924576549       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   b98ca63b5a155       busybox
	909852cc820d2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      20 minutes ago      Running             coredns                   1                   9cc643ec88d32       coredns-7c65d6cfc9-z647g
	d184b6139f52f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       3                   fc6d68de344af       storage-provisioner
	0ef403336ca71       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       2                   fc6d68de344af       storage-provisioner
	de64a319ab30a       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      20 minutes ago      Running             kube-proxy                1                   caa8831be0bd8       kube-proxy-hpvvp
	73b01a8a4080f       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      21 minutes ago      Running             kube-scheduler            1                   5218b6309474d       kube-scheduler-no-preload-820741
	13e00a6fef368       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   910053c757faf       etcd-no-preload-820741
	478ca5095dcdb       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      21 minutes ago      Running             kube-apiserver            1                   119dbeb98f771       kube-apiserver-no-preload-820741
	b6662f1bed199       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      21 minutes ago      Running             kube-controller-manager   1                   bc507605abc87       kube-controller-manager-no-preload-820741
	
	
	==> coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:42173 - 37964 "HINFO IN 7368892457938397498.2172018361582216149. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011174169s
	
	
	==> describe nodes <==
	Name:               no-preload-820741
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-820741
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=no-preload-820741
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T11_44_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 11:44:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-820741
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 12:13:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 12:13:02 +0000   Mon, 09 Dec 2024 11:44:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 12:13:02 +0000   Mon, 09 Dec 2024 11:44:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 12:13:02 +0000   Mon, 09 Dec 2024 11:44:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 12:13:02 +0000   Mon, 09 Dec 2024 11:52:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    no-preload-820741
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f24c170235740c2b22e7e8cd666993b
	  System UUID:                7f24c170-2357-40c2-b22e-7e8cd666993b
	  Boot ID:                    aa8f51f5-2473-41a2-8839-2f66039495cc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-z647g                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-820741                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-820741             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-820741    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-hpvvp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-820741             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-pwcsr              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-820741 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-820741 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-820741 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                28m                kubelet          Node no-preload-820741 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-820741 event: Registered Node no-preload-820741 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node no-preload-820741 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node no-preload-820741 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node no-preload-820741 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20m                node-controller  Node no-preload-820741 event: Registered Node no-preload-820741 in Controller
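
	The node description above (labels, conditions, capacity, pod list, and events) is standard kubectl output and can be regenerated against the same cluster; a sketch, assuming the minikube profile name doubles as the kubeconfig context as it does elsewhere in this report:

	  kubectl --context no-preload-820741 describe node no-preload-820741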
	
	
	==> dmesg <==
	[Dec 9 11:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053494] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038497] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.816659] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.043961] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.600221] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.746987] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.056332] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059060] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.193561] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.130875] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.294390] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[Dec 9 11:52] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.059653] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.811855] systemd-fstab-generator[1428]: Ignoring "noauto" option for root device
	[  +3.321821] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.920131] systemd-fstab-generator[2123]: Ignoring "noauto" option for root device
	[  +5.064464] kauditd_printk_skb: 67 callbacks suppressed
	[  +7.796709] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] <==
	{"level":"warn","ts":"2024-12-09T11:52:14.492642Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T11:52:14.054059Z","time spent":"438.577758ms","remote":"127.0.0.1:45932","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-12-09T11:52:14.492884Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"326.224947ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-820741\" ","response":"range_response_count:1 size:4646"}
	{"level":"info","ts":"2024-12-09T11:52:14.492949Z","caller":"traceutil/trace.go:171","msg":"trace[1314823773] range","detail":"{range_begin:/registry/minions/no-preload-820741; range_end:; response_count:1; response_revision:559; }","duration":"326.290523ms","start":"2024-12-09T11:52:14.166652Z","end":"2024-12-09T11:52:14.492942Z","steps":["trace[1314823773] 'agreement among raft nodes before linearized reading'  (duration: 326.126909ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T11:52:14.493034Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T11:52:14.166536Z","time spent":"326.488113ms","remote":"127.0.0.1:46150","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4670,"request content":"key:\"/registry/minions/no-preload-820741\" "}
	{"level":"warn","ts":"2024-12-09T11:52:14.809687Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.062889ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16466167026371683156 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-no-preload-820741\" mod_revision:470 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-no-preload-820741\" value_size:6987 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-no-preload-820741\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-09T11:52:14.809876Z","caller":"traceutil/trace.go:171","msg":"trace[408837113] linearizableReadLoop","detail":"{readStateIndex:597; appliedIndex:596; }","duration":"230.703343ms","start":"2024-12-09T11:52:14.579157Z","end":"2024-12-09T11:52:14.809861Z","steps":["trace[408837113] 'read index received'  (duration: 122.327389ms)","trace[408837113] 'applied index is now lower than readState.Index'  (duration: 108.374233ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T11:52:14.809986Z","caller":"traceutil/trace.go:171","msg":"trace[1519864487] transaction","detail":"{read_only:false; response_revision:560; number_of_response:1; }","duration":"306.333232ms","start":"2024-12-09T11:52:14.503644Z","end":"2024-12-09T11:52:14.809978Z","steps":["trace[1519864487] 'process raft request'  (duration: 197.900023ms)","trace[1519864487] 'compare'  (duration: 107.921495ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T11:52:14.810083Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T11:52:14.503631Z","time spent":"306.400251ms","remote":"127.0.0.1:46164","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7054,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-no-preload-820741\" mod_revision:470 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-no-preload-820741\" value_size:6987 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-no-preload-820741\" > >"}
	{"level":"warn","ts":"2024-12-09T11:52:14.810243Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.146794ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-820741\" ","response":"range_response_count:1 size:4646"}
	{"level":"info","ts":"2024-12-09T11:52:14.810795Z","caller":"traceutil/trace.go:171","msg":"trace[1954236310] range","detail":"{range_begin:/registry/minions/no-preload-820741; range_end:; response_count:1; response_revision:560; }","duration":"143.697928ms","start":"2024-12-09T11:52:14.667081Z","end":"2024-12-09T11:52:14.810779Z","steps":["trace[1954236310] 'agreement among raft nodes before linearized reading'  (duration: 143.026382ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T11:52:14.810313Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.178944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-7c65d6cfc9-z647g.180f8001d928315c\" ","response":"range_response_count:1 size:810"}
	{"level":"info","ts":"2024-12-09T11:52:14.811109Z","caller":"traceutil/trace.go:171","msg":"trace[760022203] range","detail":"{range_begin:/registry/events/kube-system/coredns-7c65d6cfc9-z647g.180f8001d928315c; range_end:; response_count:1; response_revision:560; }","duration":"231.967268ms","start":"2024-12-09T11:52:14.579128Z","end":"2024-12-09T11:52:14.811095Z","steps":["trace[760022203] 'agreement among raft nodes before linearized reading'  (duration: 231.136879ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T11:52:54.938191Z","caller":"traceutil/trace.go:171","msg":"trace[157916628] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"334.346305ms","start":"2024-12-09T11:52:54.603779Z","end":"2024-12-09T11:52:54.938125Z","steps":["trace[157916628] 'process raft request'  (duration: 334.146809ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T11:52:54.938541Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T11:52:54.603765Z","time spent":"334.626726ms","remote":"127.0.0.1:46138","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:628 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-12-09T11:52:55.243229Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.08882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-pwcsr\" ","response":"range_response_count:1 size:4385"}
	{"level":"info","ts":"2024-12-09T11:52:55.243339Z","caller":"traceutil/trace.go:171","msg":"trace[1486569575] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-pwcsr; range_end:; response_count:1; response_revision:629; }","duration":"166.203658ms","start":"2024-12-09T11:52:55.077120Z","end":"2024-12-09T11:52:55.243324Z","steps":["trace[1486569575] 'range keys from in-memory index tree'  (duration: 165.980555ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T12:02:05.099120Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":848}
	{"level":"info","ts":"2024-12-09T12:02:05.110968Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":848,"took":"11.198155ms","hash":3965474849,"current-db-size-bytes":2596864,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2596864,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-12-09T12:02:05.111072Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3965474849,"revision":848,"compact-revision":-1}
	{"level":"info","ts":"2024-12-09T12:07:05.110711Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1090}
	{"level":"info","ts":"2024-12-09T12:07:05.116804Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1090,"took":"4.796024ms","hash":556317151,"current-db-size-bytes":2596864,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1564672,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-09T12:07:05.117002Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":556317151,"revision":1090,"compact-revision":848}
	{"level":"info","ts":"2024-12-09T12:12:05.119452Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1335}
	{"level":"info","ts":"2024-12-09T12:12:05.123382Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1335,"took":"3.560764ms","hash":2302482620,"current-db-size-bytes":2596864,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1556480,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-09T12:12:05.123426Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2302482620,"revision":1335,"compact-revision":1090}
	
	
	==> kernel <==
	 12:13:07 up 21 min,  0 users,  load average: 0.02, 0.06, 0.09
	Linux no-preload-820741 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] <==
	I1209 12:08:07.537785       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:08:07.538988       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1209 12:10:07.538231       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:10:07.538646       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1209 12:10:07.539299       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:10:07.539413       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1209 12:10:07.540374       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:10:07.541572       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1209 12:12:06.538551       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:12:06.538715       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1209 12:12:07.541070       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:12:07.541313       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1209 12:12:07.541216       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:12:07.541413       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1209 12:12:07.542571       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:12:07.542664       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] <==
	I1209 12:07:40.832551       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1209 12:07:55.923988       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-820741"
	E1209 12:08:10.361184       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:08:10.840109       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1209 12:08:33.957528       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="342.392µs"
	E1209 12:08:40.368175       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:08:40.848391       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1209 12:08:47.959139       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="233.586µs"
	E1209 12:09:10.375660       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:09:10.856191       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:09:40.381354       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:09:40.864583       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:10:10.388296       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:10:10.873480       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:10:40.395195       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:10:40.881106       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:11:10.401288       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:11:10.890495       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:11:40.407223       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:11:40.899101       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:12:10.418161       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:12:10.906606       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:12:40.425667       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:12:40.913724       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1209 12:13:02.639367       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-820741"
	
	
	==> kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 11:52:07.936672       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 11:52:07.962984       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.169"]
	E1209 11:52:07.963187       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 11:52:08.120354       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 11:52:08.120432       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 11:52:08.120499       1 server_linux.go:169] "Using iptables Proxier"
	I1209 11:52:08.128711       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 11:52:08.131933       1 server.go:483] "Version info" version="v1.31.2"
	I1209 11:52:08.132013       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 11:52:08.139432       1 config.go:199] "Starting service config controller"
	I1209 11:52:08.139718       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 11:52:08.139989       1 config.go:328] "Starting node config controller"
	I1209 11:52:08.140009       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 11:52:08.140904       1 config.go:105] "Starting endpoint slice config controller"
	I1209 11:52:08.140918       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 11:52:08.240969       1 shared_informer.go:320] Caches are synced for node config
	I1209 11:52:08.240998       1 shared_informer.go:320] Caches are synced for service config
	I1209 11:52:08.241010       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] <==
	I1209 11:52:04.346699       1 serving.go:386] Generated self-signed cert in-memory
	W1209 11:52:06.471192       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 11:52:06.471232       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 11:52:06.471243       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 11:52:06.471294       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 11:52:06.528300       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1209 11:52:06.528361       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 11:52:06.537494       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 11:52:06.537538       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 11:52:06.538205       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1209 11:52:06.538275       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 11:52:06.638397       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 12:12:02 no-preload-820741 kubelet[1435]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 12:12:03 no-preload-820741 kubelet[1435]: E1209 12:12:03.240651    1435 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746323239801833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:03 no-preload-820741 kubelet[1435]: E1209 12:12:03.240739    1435 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746323239801833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:06 no-preload-820741 kubelet[1435]: E1209 12:12:06.940458    1435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pwcsr" podUID="40d4df7e-de82-478b-a77b-b27208d8262e"
	Dec 09 12:12:13 no-preload-820741 kubelet[1435]: E1209 12:12:13.242542    1435 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746333242125334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:13 no-preload-820741 kubelet[1435]: E1209 12:12:13.242966    1435 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746333242125334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:18 no-preload-820741 kubelet[1435]: E1209 12:12:18.941039    1435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pwcsr" podUID="40d4df7e-de82-478b-a77b-b27208d8262e"
	Dec 09 12:12:23 no-preload-820741 kubelet[1435]: E1209 12:12:23.245210    1435 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746343244436289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:23 no-preload-820741 kubelet[1435]: E1209 12:12:23.245620    1435 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746343244436289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:32 no-preload-820741 kubelet[1435]: E1209 12:12:32.942799    1435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pwcsr" podUID="40d4df7e-de82-478b-a77b-b27208d8262e"
	Dec 09 12:12:33 no-preload-820741 kubelet[1435]: E1209 12:12:33.247880    1435 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746353247336041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:33 no-preload-820741 kubelet[1435]: E1209 12:12:33.247965    1435 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746353247336041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:43 no-preload-820741 kubelet[1435]: E1209 12:12:43.249497    1435 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746363249167014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:43 no-preload-820741 kubelet[1435]: E1209 12:12:43.249896    1435 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746363249167014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:45 no-preload-820741 kubelet[1435]: E1209 12:12:45.941198    1435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pwcsr" podUID="40d4df7e-de82-478b-a77b-b27208d8262e"
	Dec 09 12:12:53 no-preload-820741 kubelet[1435]: E1209 12:12:53.251874    1435 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746373251431615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:53 no-preload-820741 kubelet[1435]: E1209 12:12:53.251942    1435 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746373251431615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:57 no-preload-820741 kubelet[1435]: E1209 12:12:57.940572    1435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pwcsr" podUID="40d4df7e-de82-478b-a77b-b27208d8262e"
	Dec 09 12:13:02 no-preload-820741 kubelet[1435]: E1209 12:13:02.975429    1435 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 12:13:02 no-preload-820741 kubelet[1435]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 12:13:02 no-preload-820741 kubelet[1435]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 12:13:02 no-preload-820741 kubelet[1435]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 12:13:02 no-preload-820741 kubelet[1435]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 12:13:03 no-preload-820741 kubelet[1435]: E1209 12:13:03.254306    1435 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746383253891571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:13:03 no-preload-820741 kubelet[1435]: E1209 12:13:03.254333    1435 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746383253891571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] <==
	I1209 11:52:08.198656       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1209 11:52:08.203115       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] <==
	I1209 11:52:23.062937       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 11:52:23.097448       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 11:52:23.097523       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 11:52:40.518291       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 11:52:40.519561       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"445575d8-e094-46ac-b459-bc165449ec3d", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-820741_8697bd8a-a10e-4417-905d-a77078050fe9 became leader
	I1209 11:52:40.519883       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-820741_8697bd8a-a10e-4417-905d-a77078050fe9!
	I1209 11:52:40.621043       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-820741_8697bd8a-a10e-4417-905d-a77078050fe9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-820741 -n no-preload-820741
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-820741 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-pwcsr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-820741 describe pod metrics-server-6867b74b74-pwcsr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-820741 describe pod metrics-server-6867b74b74-pwcsr: exit status 1 (71.17158ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-pwcsr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-820741 describe pod metrics-server-6867b74b74-pwcsr: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (447.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (543.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-09 12:15:42.620507569 +0000 UTC m=+6131.081233086
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-482476 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-482476 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (138.676169ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-482476 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-482476 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-482476 logs -n 25: (1.940490248s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-763643 sudo                               | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo                               | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo                               | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo cat                           | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo cat                           | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo                               | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo                               | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo cat                           | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo docker                        | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo                               | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo                               | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo cat                           | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo cat                           | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo                               | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo                               | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo                               | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo cat                           | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo cat                           | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo                               | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo                               | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo                               | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo find                          | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-763643 sudo crio                          | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-763643                                    | kindnet-763643            | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC | 09 Dec 24 12:15 UTC |
	| start   | -p enable-default-cni-763643                         | enable-default-cni-763643 | jenkins | v1.34.0 | 09 Dec 24 12:15 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 12:15:15
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 12:15:15.603080  674327 out.go:345] Setting OutFile to fd 1 ...
	I1209 12:15:15.603224  674327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 12:15:15.603237  674327 out.go:358] Setting ErrFile to fd 2...
	I1209 12:15:15.603241  674327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 12:15:15.603470  674327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 12:15:15.604093  674327 out.go:352] Setting JSON to false
	I1209 12:15:15.605341  674327 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":17860,"bootTime":1733728656,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 12:15:15.605417  674327 start.go:139] virtualization: kvm guest
	I1209 12:15:15.607841  674327 out.go:177] * [enable-default-cni-763643] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 12:15:15.609318  674327 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 12:15:15.609367  674327 notify.go:220] Checking for updates...
	I1209 12:15:15.611918  674327 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 12:15:15.613258  674327 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 12:15:15.614558  674327 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 12:15:15.615837  674327 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 12:15:15.617090  674327 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 12:15:15.618971  674327 config.go:182] Loaded profile config "calico-763643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 12:15:15.619086  674327 config.go:182] Loaded profile config "custom-flannel-763643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 12:15:15.619173  674327 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 12:15:15.619278  674327 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 12:15:15.658522  674327 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 12:15:15.659911  674327 start.go:297] selected driver: kvm2
	I1209 12:15:15.659925  674327 start.go:901] validating driver "kvm2" against <nil>
	I1209 12:15:15.659946  674327 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 12:15:15.660731  674327 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 12:15:15.660823  674327 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 12:15:15.678824  674327 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 12:15:15.678883  674327 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E1209 12:15:15.679163  674327 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1209 12:15:15.679192  674327 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 12:15:15.679238  674327 cni.go:84] Creating CNI manager for "bridge"
	I1209 12:15:15.679253  674327 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 12:15:15.679329  674327 start.go:340] cluster config:
	{Name:enable-default-cni-763643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-763643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 12:15:15.679470  674327 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 12:15:15.681292  674327 out.go:177] * Starting "enable-default-cni-763643" primary control-plane node in "enable-default-cni-763643" cluster
	I1209 12:15:11.468986  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:11.469477  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | unable to find current IP address of domain custom-flannel-763643 in network mk-custom-flannel-763643
	I1209 12:15:11.469506  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | I1209 12:15:11.469425  673127 retry.go:31] will retry after 1.386869865s: waiting for machine to come up
	I1209 12:15:12.857606  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:12.858059  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | unable to find current IP address of domain custom-flannel-763643 in network mk-custom-flannel-763643
	I1209 12:15:12.858081  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | I1209 12:15:12.858047  673127 retry.go:31] will retry after 1.391417087s: waiting for machine to come up
	I1209 12:15:14.251336  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:14.251850  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | unable to find current IP address of domain custom-flannel-763643 in network mk-custom-flannel-763643
	I1209 12:15:14.251879  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | I1209 12:15:14.251804  673127 retry.go:31] will retry after 1.812324292s: waiting for machine to come up
	I1209 12:15:16.066678  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:16.067278  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | unable to find current IP address of domain custom-flannel-763643 in network mk-custom-flannel-763643
	I1209 12:15:16.067306  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | I1209 12:15:16.067221  673127 retry.go:31] will retry after 2.087097445s: waiting for machine to come up
	I1209 12:15:15.534437  672179 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 12:15:15.728514  672179 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 12:15:15.902349  672179 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 12:15:16.082128  672179 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 12:15:16.302790  672179 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 12:15:16.303605  672179 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 12:15:16.306058  672179 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 12:15:16.307981  672179 out.go:235]   - Booting up control plane ...
	I1209 12:15:16.308131  672179 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 12:15:16.308239  672179 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 12:15:16.308358  672179 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 12:15:16.326900  672179 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 12:15:16.336340  672179 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 12:15:16.336585  672179 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 12:15:16.472333  672179 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 12:15:16.472551  672179 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 12:15:16.975047  672179 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.61451ms
	I1209 12:15:16.975211  672179 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 12:15:15.682618  674327 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 12:15:15.682666  674327 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 12:15:15.682678  674327 cache.go:56] Caching tarball of preloaded images
	I1209 12:15:15.682769  674327 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 12:15:15.682785  674327 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 12:15:15.682878  674327 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/enable-default-cni-763643/config.json ...
	I1209 12:15:15.682903  674327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/enable-default-cni-763643/config.json: {Name:mk168c53a0ae33a039e9b8fd9ea3304a7e5ee7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:15:15.683058  674327 start.go:360] acquireMachinesLock for enable-default-cni-763643: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 12:15:18.155576  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:18.156036  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | unable to find current IP address of domain custom-flannel-763643 in network mk-custom-flannel-763643
	I1209 12:15:18.156069  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | I1209 12:15:18.155970  673127 retry.go:31] will retry after 2.782224954s: waiting for machine to come up
	I1209 12:15:20.939199  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:20.939570  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | unable to find current IP address of domain custom-flannel-763643 in network mk-custom-flannel-763643
	I1209 12:15:20.939593  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | I1209 12:15:20.939530  673127 retry.go:31] will retry after 2.921047276s: waiting for machine to come up
	I1209 12:15:22.476661  672179 kubeadm.go:310] [api-check] The API server is healthy after 5.502711544s
	I1209 12:15:22.488486  672179 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 12:15:22.505493  672179 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 12:15:22.538907  672179 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 12:15:22.539164  672179 kubeadm.go:310] [mark-control-plane] Marking the node calico-763643 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 12:15:22.554606  672179 kubeadm.go:310] [bootstrap-token] Using token: hc5bgb.2grpnw6j8ldm0i0s
	I1209 12:15:22.556005  672179 out.go:235]   - Configuring RBAC rules ...
	I1209 12:15:22.556185  672179 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 12:15:22.563355  672179 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 12:15:22.571364  672179 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 12:15:22.576541  672179 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 12:15:22.580744  672179 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 12:15:22.587781  672179 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 12:15:22.885622  672179 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 12:15:23.306804  672179 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 12:15:23.886048  672179 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 12:15:23.886073  672179 kubeadm.go:310] 
	I1209 12:15:23.886216  672179 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 12:15:23.886236  672179 kubeadm.go:310] 
	I1209 12:15:23.886352  672179 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 12:15:23.886364  672179 kubeadm.go:310] 
	I1209 12:15:23.886396  672179 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 12:15:23.886482  672179 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 12:15:23.886551  672179 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 12:15:23.886562  672179 kubeadm.go:310] 
	I1209 12:15:23.886631  672179 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 12:15:23.886640  672179 kubeadm.go:310] 
	I1209 12:15:23.886691  672179 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 12:15:23.886699  672179 kubeadm.go:310] 
	I1209 12:15:23.886748  672179 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 12:15:23.886867  672179 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 12:15:23.886996  672179 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 12:15:23.887019  672179 kubeadm.go:310] 
	I1209 12:15:23.887133  672179 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 12:15:23.887245  672179 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 12:15:23.887265  672179 kubeadm.go:310] 
	I1209 12:15:23.887384  672179 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hc5bgb.2grpnw6j8ldm0i0s \
	I1209 12:15:23.887544  672179 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 12:15:23.887583  672179 kubeadm.go:310] 	--control-plane 
	I1209 12:15:23.887590  672179 kubeadm.go:310] 
	I1209 12:15:23.887700  672179 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 12:15:23.887711  672179 kubeadm.go:310] 
	I1209 12:15:23.887843  672179 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hc5bgb.2grpnw6j8ldm0i0s \
	I1209 12:15:23.887996  672179 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 12:15:23.888451  672179 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 12:15:23.888590  672179 cni.go:84] Creating CNI manager for "calico"
	I1209 12:15:23.890298  672179 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I1209 12:15:23.891870  672179 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1209 12:15:23.891898  672179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (323422 bytes)
	I1209 12:15:23.919255  672179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 12:15:23.863544  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:23.864055  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | unable to find current IP address of domain custom-flannel-763643 in network mk-custom-flannel-763643
	I1209 12:15:23.864079  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | I1209 12:15:23.864009  673127 retry.go:31] will retry after 4.664767413s: waiting for machine to come up
	I1209 12:15:25.480210  672179 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.560898479s)
	I1209 12:15:25.480265  672179 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 12:15:25.480370  672179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 12:15:25.480382  672179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-763643 minikube.k8s.io/updated_at=2024_12_09T12_15_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=calico-763643 minikube.k8s.io/primary=true
	I1209 12:15:25.502918  672179 ops.go:34] apiserver oom_adj: -16
	I1209 12:15:25.598732  672179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 12:15:26.099777  672179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 12:15:26.598957  672179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 12:15:27.099229  672179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 12:15:27.599500  672179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 12:15:28.099414  672179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 12:15:28.199252  672179 kubeadm.go:1113] duration metric: took 2.718960139s to wait for elevateKubeSystemPrivileges
	I1209 12:15:28.199291  672179 kubeadm.go:394] duration metric: took 15.785049934s to StartCluster
	I1209 12:15:28.199311  672179 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:15:28.199386  672179 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 12:15:28.200311  672179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:15:28.200542  672179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 12:15:28.200549  672179 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.150 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 12:15:28.200630  672179 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 12:15:28.200739  672179 addons.go:69] Setting storage-provisioner=true in profile "calico-763643"
	I1209 12:15:28.200759  672179 addons.go:234] Setting addon storage-provisioner=true in "calico-763643"
	I1209 12:15:28.200756  672179 addons.go:69] Setting default-storageclass=true in profile "calico-763643"
	I1209 12:15:28.200772  672179 config.go:182] Loaded profile config "calico-763643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 12:15:28.200790  672179 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-763643"
	I1209 12:15:28.200804  672179 host.go:66] Checking if "calico-763643" exists ...
	I1209 12:15:28.201261  672179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 12:15:28.201268  672179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 12:15:28.201290  672179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 12:15:28.201296  672179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 12:15:28.202216  672179 out.go:177] * Verifying Kubernetes components...
	I1209 12:15:28.203623  672179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 12:15:28.216777  672179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42149
	I1209 12:15:28.217169  672179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36653
	I1209 12:15:28.217428  672179 main.go:141] libmachine: () Calling .GetVersion
	I1209 12:15:28.217669  672179 main.go:141] libmachine: () Calling .GetVersion
	I1209 12:15:28.218018  672179 main.go:141] libmachine: Using API Version  1
	I1209 12:15:28.218042  672179 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 12:15:28.218283  672179 main.go:141] libmachine: Using API Version  1
	I1209 12:15:28.218309  672179 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 12:15:28.218444  672179 main.go:141] libmachine: () Calling .GetMachineName
	I1209 12:15:28.218630  672179 main.go:141] libmachine: (calico-763643) Calling .GetState
	I1209 12:15:28.218718  672179 main.go:141] libmachine: () Calling .GetMachineName
	I1209 12:15:28.219289  672179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 12:15:28.219320  672179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 12:15:28.222213  672179 addons.go:234] Setting addon default-storageclass=true in "calico-763643"
	I1209 12:15:28.222261  672179 host.go:66] Checking if "calico-763643" exists ...
	I1209 12:15:28.222657  672179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 12:15:28.222690  672179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 12:15:28.235219  672179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33709
	I1209 12:15:28.235835  672179 main.go:141] libmachine: () Calling .GetVersion
	I1209 12:15:28.236338  672179 main.go:141] libmachine: Using API Version  1
	I1209 12:15:28.236367  672179 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 12:15:28.236793  672179 main.go:141] libmachine: () Calling .GetMachineName
	I1209 12:15:28.237096  672179 main.go:141] libmachine: (calico-763643) Calling .GetState
	I1209 12:15:28.238760  672179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36827
	I1209 12:15:28.239033  672179 main.go:141] libmachine: (calico-763643) Calling .DriverName
	I1209 12:15:28.239286  672179 main.go:141] libmachine: () Calling .GetVersion
	I1209 12:15:28.239733  672179 main.go:141] libmachine: Using API Version  1
	I1209 12:15:28.239755  672179 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 12:15:28.240138  672179 main.go:141] libmachine: () Calling .GetMachineName
	I1209 12:15:28.240588  672179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 12:15:28.240613  672179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 12:15:28.240783  672179 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 12:15:28.241990  672179 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 12:15:28.242009  672179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 12:15:28.242035  672179 main.go:141] libmachine: (calico-763643) Calling .GetSSHHostname
	I1209 12:15:28.244797  672179 main.go:141] libmachine: (calico-763643) DBG | domain calico-763643 has defined MAC address 52:54:00:9d:52:97 in network mk-calico-763643
	I1209 12:15:28.245168  672179 main.go:141] libmachine: (calico-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:52:97", ip: ""} in network mk-calico-763643: {Iface:virbr4 ExpiryTime:2024-12-09 13:14:51 +0000 UTC Type:0 Mac:52:54:00:9d:52:97 Iaid: IPaddr:192.168.72.150 Prefix:24 Hostname:calico-763643 Clientid:01:52:54:00:9d:52:97}
	I1209 12:15:28.245202  672179 main.go:141] libmachine: (calico-763643) DBG | domain calico-763643 has defined IP address 192.168.72.150 and MAC address 52:54:00:9d:52:97 in network mk-calico-763643
	I1209 12:15:28.245445  672179 main.go:141] libmachine: (calico-763643) Calling .GetSSHPort
	I1209 12:15:28.245641  672179 main.go:141] libmachine: (calico-763643) Calling .GetSSHKeyPath
	I1209 12:15:28.245809  672179 main.go:141] libmachine: (calico-763643) Calling .GetSSHUsername
	I1209 12:15:28.245961  672179 sshutil.go:53] new ssh client: &{IP:192.168.72.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/calico-763643/id_rsa Username:docker}
	I1209 12:15:28.257661  672179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33123
	I1209 12:15:28.258147  672179 main.go:141] libmachine: () Calling .GetVersion
	I1209 12:15:28.258672  672179 main.go:141] libmachine: Using API Version  1
	I1209 12:15:28.258695  672179 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 12:15:28.259002  672179 main.go:141] libmachine: () Calling .GetMachineName
	I1209 12:15:28.259183  672179 main.go:141] libmachine: (calico-763643) Calling .GetState
	I1209 12:15:28.260747  672179 main.go:141] libmachine: (calico-763643) Calling .DriverName
	I1209 12:15:28.260975  672179 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 12:15:28.260992  672179 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 12:15:28.261007  672179 main.go:141] libmachine: (calico-763643) Calling .GetSSHHostname
	I1209 12:15:28.263944  672179 main.go:141] libmachine: (calico-763643) DBG | domain calico-763643 has defined MAC address 52:54:00:9d:52:97 in network mk-calico-763643
	I1209 12:15:28.264395  672179 main.go:141] libmachine: (calico-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:52:97", ip: ""} in network mk-calico-763643: {Iface:virbr4 ExpiryTime:2024-12-09 13:14:51 +0000 UTC Type:0 Mac:52:54:00:9d:52:97 Iaid: IPaddr:192.168.72.150 Prefix:24 Hostname:calico-763643 Clientid:01:52:54:00:9d:52:97}
	I1209 12:15:28.264438  672179 main.go:141] libmachine: (calico-763643) DBG | domain calico-763643 has defined IP address 192.168.72.150 and MAC address 52:54:00:9d:52:97 in network mk-calico-763643
	I1209 12:15:28.264765  672179 main.go:141] libmachine: (calico-763643) Calling .GetSSHPort
	I1209 12:15:28.264939  672179 main.go:141] libmachine: (calico-763643) Calling .GetSSHKeyPath
	I1209 12:15:28.265110  672179 main.go:141] libmachine: (calico-763643) Calling .GetSSHUsername
	I1209 12:15:28.265265  672179 sshutil.go:53] new ssh client: &{IP:192.168.72.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/calico-763643/id_rsa Username:docker}
	I1209 12:15:28.543497  672179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 12:15:28.543599  672179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 12:15:28.550063  672179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 12:15:28.635446  672179 node_ready.go:35] waiting up to 15m0s for node "calico-763643" to be "Ready" ...
	I1209 12:15:28.713346  672179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 12:15:29.070670  672179 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1209 12:15:29.070760  672179 main.go:141] libmachine: Making call to close driver server
	I1209 12:15:29.070796  672179 main.go:141] libmachine: (calico-763643) Calling .Close
	I1209 12:15:29.071116  672179 main.go:141] libmachine: (calico-763643) DBG | Closing plugin on server side
	I1209 12:15:29.071156  672179 main.go:141] libmachine: Successfully made call to close driver server
	I1209 12:15:29.071163  672179 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 12:15:29.071172  672179 main.go:141] libmachine: Making call to close driver server
	I1209 12:15:29.071179  672179 main.go:141] libmachine: (calico-763643) Calling .Close
	I1209 12:15:29.071440  672179 main.go:141] libmachine: Successfully made call to close driver server
	I1209 12:15:29.071463  672179 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 12:15:29.080380  672179 main.go:141] libmachine: Making call to close driver server
	I1209 12:15:29.080412  672179 main.go:141] libmachine: (calico-763643) Calling .Close
	I1209 12:15:29.080743  672179 main.go:141] libmachine: Successfully made call to close driver server
	I1209 12:15:29.080765  672179 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 12:15:29.395947  672179 main.go:141] libmachine: Making call to close driver server
	I1209 12:15:29.395970  672179 main.go:141] libmachine: (calico-763643) Calling .Close
	I1209 12:15:29.396353  672179 main.go:141] libmachine: Successfully made call to close driver server
	I1209 12:15:29.396406  672179 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 12:15:29.396382  672179 main.go:141] libmachine: (calico-763643) DBG | Closing plugin on server side
	I1209 12:15:29.396420  672179 main.go:141] libmachine: Making call to close driver server
	I1209 12:15:29.396429  672179 main.go:141] libmachine: (calico-763643) Calling .Close
	I1209 12:15:29.396730  672179 main.go:141] libmachine: Successfully made call to close driver server
	I1209 12:15:29.396755  672179 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 12:15:29.399037  672179 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1209 12:15:29.400471  672179 addons.go:510] duration metric: took 1.199835282s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1209 12:15:29.575768  672179 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-763643" context rescaled to 1 replicas
	I1209 12:15:28.531439  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:28.531984  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has current primary IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:28.532017  672636 main.go:141] libmachine: (custom-flannel-763643) Found IP for machine: 192.168.61.89
	I1209 12:15:28.532032  672636 main.go:141] libmachine: (custom-flannel-763643) Reserving static IP address...
	I1209 12:15:28.532360  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | unable to find host DHCP lease matching {name: "custom-flannel-763643", mac: "52:54:00:42:91:d8", ip: "192.168.61.89"} in network mk-custom-flannel-763643
	I1209 12:15:28.613463  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | Getting to WaitForSSH function...
	I1209 12:15:28.613508  672636 main.go:141] libmachine: (custom-flannel-763643) Reserved static IP address: 192.168.61.89
	I1209 12:15:28.613522  672636 main.go:141] libmachine: (custom-flannel-763643) Waiting for SSH to be available...
	I1209 12:15:28.616807  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:28.617199  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643
	I1209 12:15:28.617232  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | unable to find defined IP address of network mk-custom-flannel-763643 interface with MAC address 52:54:00:42:91:d8
	I1209 12:15:28.617440  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | Using SSH client type: external
	I1209 12:15:28.617466  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/custom-flannel-763643/id_rsa (-rw-------)
	I1209 12:15:28.617514  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/custom-flannel-763643/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 12:15:28.617526  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | About to run SSH command:
	I1209 12:15:28.617541  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | exit 0
	I1209 12:15:28.621306  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | SSH cmd err, output: exit status 255: 
	I1209 12:15:28.621340  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1209 12:15:28.621354  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | command : exit 0
	I1209 12:15:28.621368  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | err     : exit status 255
	I1209 12:15:28.621382  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | output  : 
	I1209 12:15:33.310962  674327 start.go:364] duration metric: took 17.627861701s to acquireMachinesLock for "enable-default-cni-763643"
	I1209 12:15:33.311044  674327 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-763643 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-763643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 12:15:33.311152  674327 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 12:15:31.622386  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | Getting to WaitForSSH function...
	I1209 12:15:31.625053  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:31.625433  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643: {Iface:virbr3 ExpiryTime:2024-12-09 13:15:21 +0000 UTC Type:0 Mac:52:54:00:42:91:d8 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:custom-flannel-763643 Clientid:01:52:54:00:42:91:d8}
	I1209 12:15:31.625463  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:31.625614  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | Using SSH client type: external
	I1209 12:15:31.625640  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/custom-flannel-763643/id_rsa (-rw-------)
	I1209 12:15:31.625673  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/custom-flannel-763643/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 12:15:31.625683  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | About to run SSH command:
	I1209 12:15:31.625699  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | exit 0
	I1209 12:15:31.758196  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | SSH cmd err, output: <nil>: 
	I1209 12:15:31.758470  672636 main.go:141] libmachine: (custom-flannel-763643) KVM machine creation complete!
	I1209 12:15:31.758750  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetConfigRaw
	I1209 12:15:31.759336  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .DriverName
	I1209 12:15:31.759532  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .DriverName
	I1209 12:15:31.759688  672636 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 12:15:31.759706  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetState
	I1209 12:15:31.760929  672636 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 12:15:31.760950  672636 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 12:15:31.760958  672636 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 12:15:31.760966  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHHostname
	I1209 12:15:31.763762  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:31.764071  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643: {Iface:virbr3 ExpiryTime:2024-12-09 13:15:21 +0000 UTC Type:0 Mac:52:54:00:42:91:d8 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:custom-flannel-763643 Clientid:01:52:54:00:42:91:d8}
	I1209 12:15:31.764107  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:31.764288  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHPort
	I1209 12:15:31.764464  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHKeyPath
	I1209 12:15:31.764646  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHKeyPath
	I1209 12:15:31.764776  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHUsername
	I1209 12:15:31.764956  672636 main.go:141] libmachine: Using SSH client type: native
	I1209 12:15:31.765224  672636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I1209 12:15:31.765240  672636 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 12:15:31.881284  672636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 12:15:31.881314  672636 main.go:141] libmachine: Detecting the provisioner...
	I1209 12:15:31.881325  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHHostname
	I1209 12:15:31.884092  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:31.884350  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643: {Iface:virbr3 ExpiryTime:2024-12-09 13:15:21 +0000 UTC Type:0 Mac:52:54:00:42:91:d8 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:custom-flannel-763643 Clientid:01:52:54:00:42:91:d8}
	I1209 12:15:31.884377  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:31.884636  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHPort
	I1209 12:15:31.884831  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHKeyPath
	I1209 12:15:31.885014  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHKeyPath
	I1209 12:15:31.885161  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHUsername
	I1209 12:15:31.885333  672636 main.go:141] libmachine: Using SSH client type: native
	I1209 12:15:31.885551  672636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I1209 12:15:31.885562  672636 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 12:15:32.002765  672636 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 12:15:32.002831  672636 main.go:141] libmachine: found compatible host: buildroot
	I1209 12:15:32.002842  672636 main.go:141] libmachine: Provisioning with buildroot...
	I1209 12:15:32.002857  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetMachineName
	I1209 12:15:32.003132  672636 buildroot.go:166] provisioning hostname "custom-flannel-763643"
	I1209 12:15:32.003163  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetMachineName
	I1209 12:15:32.003350  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHHostname
	I1209 12:15:32.006431  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:32.006857  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643: {Iface:virbr3 ExpiryTime:2024-12-09 13:15:21 +0000 UTC Type:0 Mac:52:54:00:42:91:d8 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:custom-flannel-763643 Clientid:01:52:54:00:42:91:d8}
	I1209 12:15:32.006895  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:32.007110  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHPort
	I1209 12:15:32.007356  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHKeyPath
	I1209 12:15:32.007536  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHKeyPath
	I1209 12:15:32.007691  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHUsername
	I1209 12:15:32.007850  672636 main.go:141] libmachine: Using SSH client type: native
	I1209 12:15:32.008064  672636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I1209 12:15:32.008074  672636 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-763643 && echo "custom-flannel-763643" | sudo tee /etc/hostname
	I1209 12:15:32.136755  672636 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-763643
	
	I1209 12:15:32.136797  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHHostname
	I1209 12:15:32.139917  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:32.140331  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643: {Iface:virbr3 ExpiryTime:2024-12-09 13:15:21 +0000 UTC Type:0 Mac:52:54:00:42:91:d8 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:custom-flannel-763643 Clientid:01:52:54:00:42:91:d8}
	I1209 12:15:32.140360  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:32.140562  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHPort
	I1209 12:15:32.140713  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHKeyPath
	I1209 12:15:32.140805  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHKeyPath
	I1209 12:15:32.140968  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHUsername
	I1209 12:15:32.141169  672636 main.go:141] libmachine: Using SSH client type: native
	I1209 12:15:32.141366  672636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I1209 12:15:32.141395  672636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-763643' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-763643/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-763643' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 12:15:32.267069  672636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 12:15:32.267106  672636 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 12:15:32.267132  672636 buildroot.go:174] setting up certificates
	I1209 12:15:32.267147  672636 provision.go:84] configureAuth start
	I1209 12:15:32.267165  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetMachineName
	I1209 12:15:32.267446  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetIP
	I1209 12:15:32.270361  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:32.270761  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643: {Iface:virbr3 ExpiryTime:2024-12-09 13:15:21 +0000 UTC Type:0 Mac:52:54:00:42:91:d8 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:custom-flannel-763643 Clientid:01:52:54:00:42:91:d8}
	I1209 12:15:32.270792  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:32.270942  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHHostname
	I1209 12:15:32.273379  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:32.273840  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643: {Iface:virbr3 ExpiryTime:2024-12-09 13:15:21 +0000 UTC Type:0 Mac:52:54:00:42:91:d8 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:custom-flannel-763643 Clientid:01:52:54:00:42:91:d8}
	I1209 12:15:32.273870  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:32.274006  672636 provision.go:143] copyHostCerts
	I1209 12:15:32.274077  672636 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 12:15:32.274102  672636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 12:15:32.274203  672636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 12:15:32.274312  672636 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 12:15:32.274322  672636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 12:15:32.274345  672636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 12:15:32.274404  672636 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 12:15:32.274412  672636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 12:15:32.274433  672636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 12:15:32.274481  672636 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-763643 san=[127.0.0.1 192.168.61.89 custom-flannel-763643 localhost minikube]
	I1209 12:15:32.640841  672636 provision.go:177] copyRemoteCerts
	I1209 12:15:32.640901  672636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 12:15:32.640929  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHHostname
	I1209 12:15:32.644182  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:32.644534  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643: {Iface:virbr3 ExpiryTime:2024-12-09 13:15:21 +0000 UTC Type:0 Mac:52:54:00:42:91:d8 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:custom-flannel-763643 Clientid:01:52:54:00:42:91:d8}
	I1209 12:15:32.644566  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:32.644820  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHPort
	I1209 12:15:32.645121  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHKeyPath
	I1209 12:15:32.645323  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHUsername
	I1209 12:15:32.645487  672636 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/custom-flannel-763643/id_rsa Username:docker}
	I1209 12:15:32.739242  672636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 12:15:32.763107  672636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1209 12:15:32.786965  672636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 12:15:32.810960  672636 provision.go:87] duration metric: took 543.791027ms to configureAuth
	I1209 12:15:32.810999  672636 buildroot.go:189] setting minikube options for container-runtime
	I1209 12:15:32.811160  672636 config.go:182] Loaded profile config "custom-flannel-763643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 12:15:32.811253  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHHostname
	I1209 12:15:32.814131  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:32.814588  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643: {Iface:virbr3 ExpiryTime:2024-12-09 13:15:21 +0000 UTC Type:0 Mac:52:54:00:42:91:d8 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:custom-flannel-763643 Clientid:01:52:54:00:42:91:d8}
	I1209 12:15:32.814615  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:32.814870  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHPort
	I1209 12:15:32.815076  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHKeyPath
	I1209 12:15:32.815268  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHKeyPath
	I1209 12:15:32.815484  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHUsername
	I1209 12:15:32.815702  672636 main.go:141] libmachine: Using SSH client type: native
	I1209 12:15:32.815928  672636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I1209 12:15:32.815952  672636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 12:15:33.045681  672636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 12:15:33.045720  672636 main.go:141] libmachine: Checking connection to Docker...
	I1209 12:15:33.045734  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetURL
	I1209 12:15:33.047170  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | Using libvirt version 6000000
	I1209 12:15:33.049716  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:33.050016  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643: {Iface:virbr3 ExpiryTime:2024-12-09 13:15:21 +0000 UTC Type:0 Mac:52:54:00:42:91:d8 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:custom-flannel-763643 Clientid:01:52:54:00:42:91:d8}
	I1209 12:15:33.050049  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:33.050271  672636 main.go:141] libmachine: Docker is up and running!
	I1209 12:15:33.050287  672636 main.go:141] libmachine: Reticulating splines...
	I1209 12:15:33.050295  672636 client.go:171] duration metric: took 27.740793381s to LocalClient.Create
	I1209 12:15:33.050318  672636 start.go:167] duration metric: took 27.740860231s to libmachine.API.Create "custom-flannel-763643"
	I1209 12:15:33.050334  672636 start.go:293] postStartSetup for "custom-flannel-763643" (driver="kvm2")
	I1209 12:15:33.050350  672636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 12:15:33.050375  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .DriverName
	I1209 12:15:33.050646  672636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 12:15:33.050675  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHHostname
	I1209 12:15:33.053438  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:33.053897  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643: {Iface:virbr3 ExpiryTime:2024-12-09 13:15:21 +0000 UTC Type:0 Mac:52:54:00:42:91:d8 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:custom-flannel-763643 Clientid:01:52:54:00:42:91:d8}
	I1209 12:15:33.053927  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:33.054106  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHPort
	I1209 12:15:33.054358  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHKeyPath
	I1209 12:15:33.054600  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHUsername
	I1209 12:15:33.054783  672636 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/custom-flannel-763643/id_rsa Username:docker}
	I1209 12:15:33.144802  672636 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 12:15:33.149107  672636 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 12:15:33.149137  672636 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 12:15:33.149213  672636 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 12:15:33.149305  672636 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 12:15:33.149422  672636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 12:15:33.158450  672636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 12:15:33.183679  672636 start.go:296] duration metric: took 133.326771ms for postStartSetup
	I1209 12:15:33.183763  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetConfigRaw
	I1209 12:15:33.184458  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetIP
	I1209 12:15:33.187548  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:33.187944  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643: {Iface:virbr3 ExpiryTime:2024-12-09 13:15:21 +0000 UTC Type:0 Mac:52:54:00:42:91:d8 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:custom-flannel-763643 Clientid:01:52:54:00:42:91:d8}
	I1209 12:15:33.187974  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:33.188292  672636 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/config.json ...
	I1209 12:15:33.188536  672636 start.go:128] duration metric: took 27.905337857s to createHost
	I1209 12:15:33.188569  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHHostname
	I1209 12:15:33.191128  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:33.191498  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643: {Iface:virbr3 ExpiryTime:2024-12-09 13:15:21 +0000 UTC Type:0 Mac:52:54:00:42:91:d8 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:custom-flannel-763643 Clientid:01:52:54:00:42:91:d8}
	I1209 12:15:33.191526  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:33.191749  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHPort
	I1209 12:15:33.191980  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHKeyPath
	I1209 12:15:33.192143  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHKeyPath
	I1209 12:15:33.192314  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHUsername
	I1209 12:15:33.192513  672636 main.go:141] libmachine: Using SSH client type: native
	I1209 12:15:33.192730  672636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I1209 12:15:33.192750  672636 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 12:15:33.310753  672636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733746533.297861737
	
	I1209 12:15:33.310785  672636 fix.go:216] guest clock: 1733746533.297861737
	I1209 12:15:33.310796  672636 fix.go:229] Guest: 2024-12-09 12:15:33.297861737 +0000 UTC Remote: 2024-12-09 12:15:33.188552691 +0000 UTC m=+52.094137999 (delta=109.309046ms)
	I1209 12:15:33.310856  672636 fix.go:200] guest clock delta is within tolerance: 109.309046ms
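
Note: the `date +%s.%N` round-trip above is how the start code estimates guest/host clock skew before releasing the machine lock. Below is a minimal Go sketch of the same check; `checkGuestClock` and the 2-second tolerance are illustrative names, not minikube's actual helper.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// checkGuestClock returns an error if the guest clock differs from the host
// clock by more than tolerance. guestOut is the raw output of `date +%s.%N`.
func checkGuestClock(guestOut string, tolerance time.Duration) error {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return fmt.Errorf("parsing guest time %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if math.Abs(float64(delta)) > float64(tolerance) {
		return fmt.Errorf("guest clock delta %v exceeds tolerance %v", delta, tolerance)
	}
	return nil
}

func main() {
	// Example value taken from the SSH output logged above.
	fmt.Println(checkGuestClock("1733746533.297861737\n", 2*time.Second))
}
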
	I1209 12:15:33.310862  672636 start.go:83] releasing machines lock for "custom-flannel-763643", held for 28.027851066s
	I1209 12:15:33.310893  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .DriverName
	I1209 12:15:33.311224  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetIP
	I1209 12:15:33.314223  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:33.314693  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643: {Iface:virbr3 ExpiryTime:2024-12-09 13:15:21 +0000 UTC Type:0 Mac:52:54:00:42:91:d8 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:custom-flannel-763643 Clientid:01:52:54:00:42:91:d8}
	I1209 12:15:33.314737  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:33.315031  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .DriverName
	I1209 12:15:33.315603  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .DriverName
	I1209 12:15:33.315818  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .DriverName
	I1209 12:15:33.315900  672636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 12:15:33.315960  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHHostname
	I1209 12:15:33.316016  672636 ssh_runner.go:195] Run: cat /version.json
	I1209 12:15:33.316048  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHHostname
	I1209 12:15:33.318886  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:33.319251  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:33.319351  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643: {Iface:virbr3 ExpiryTime:2024-12-09 13:15:21 +0000 UTC Type:0 Mac:52:54:00:42:91:d8 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:custom-flannel-763643 Clientid:01:52:54:00:42:91:d8}
	I1209 12:15:33.319395  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:33.319583  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHPort
	I1209 12:15:33.319774  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHKeyPath
	I1209 12:15:33.319876  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643: {Iface:virbr3 ExpiryTime:2024-12-09 13:15:21 +0000 UTC Type:0 Mac:52:54:00:42:91:d8 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:custom-flannel-763643 Clientid:01:52:54:00:42:91:d8}
	I1209 12:15:33.319924  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:33.320002  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHUsername
	I1209 12:15:33.320117  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHPort
	I1209 12:15:33.320200  672636 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/custom-flannel-763643/id_rsa Username:docker}
	I1209 12:15:33.320508  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHKeyPath
	I1209 12:15:33.320767  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetSSHUsername
	I1209 12:15:33.320922  672636 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/custom-flannel-763643/id_rsa Username:docker}
	I1209 12:15:33.448627  672636 ssh_runner.go:195] Run: systemctl --version
	I1209 12:15:33.456822  672636 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 12:15:33.628623  672636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 12:15:33.635318  672636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 12:15:33.635419  672636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 12:15:33.658795  672636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
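
Note: the `find ... -exec mv` command above disables the stock bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix so CRI-O no longer loads them. A rough local Go equivalent of that rename pass (a sketch only; minikube actually runs the shell command over SSH):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs in dir so the
// container runtime stops loading them, and returns the files it disabled.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	files, err := disableConflictingCNI("/etc/cni/net.d")
	fmt.Println(files, err)
}
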
	I1209 12:15:33.658838  672636 start.go:495] detecting cgroup driver to use...
	I1209 12:15:33.658917  672636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 12:15:33.677168  672636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 12:15:33.691700  672636 docker.go:217] disabling cri-docker service (if available) ...
	I1209 12:15:33.691783  672636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 12:15:33.706527  672636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 12:15:33.721383  672636 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 12:15:33.872898  672636 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 12:15:34.048963  672636 docker.go:233] disabling docker service ...
	I1209 12:15:34.049050  672636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 12:15:34.065436  672636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 12:15:34.079898  672636 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 12:15:34.239936  672636 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 12:15:34.379590  672636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
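
Note: the cri-docker/docker teardown above is a fixed sequence of systemctl calls (stop, disable, mask, then verify with is-active) in which individual failures are tolerated, since not every unit exists on the guest image. A compact sketch of that sequence via os/exec; the command list is copied from the log, error handling is simplified.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Stop, disable and mask the Docker-based runtimes so CRI-O owns the node.
	// Each step is allowed to fail, e.g. when a unit is not installed.
	cmds := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, c := range cmds {
		if out, err := exec.Command("sudo", c...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed (ignored): %v: %s\n", c, err, out)
		}
	}
}
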
	I1209 12:15:34.396285  672636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 12:15:34.416212  672636 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 12:15:34.416298  672636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 12:15:34.428070  672636 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 12:15:34.428149  672636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 12:15:34.439390  672636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 12:15:34.453063  672636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 12:15:34.464874  672636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 12:15:34.479436  672636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 12:15:34.491390  672636 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 12:15:34.510620  672636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
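
Note: the sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, set conmon_cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A sketch of the first two rewrites done as string substitution in Go; the file path and keys come from the log, the regexes are illustrative rather than minikube's exact sed expressions.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Println(err)
		return
	}
	out := string(data)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(out, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(out, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(conf, []byte(out), 0o644); err != nil {
		fmt.Println(err)
	}
}
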
	I1209 12:15:34.521064  672636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 12:15:34.531058  672636 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 12:15:34.531128  672636 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 12:15:34.544638  672636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
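
Note: the status-255 sysctl error above is expected on a fresh guest: /proc/sys/net/bridge/ only appears after the br_netfilter module is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. A minimal sketch of that check-then-load-then-enable order; the paths are taken from the log, and modprobe is simply shelled out.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		// The bridge sysctls only exist once br_netfilter is loaded.
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter: %v: %s\n", err, out)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Println(err)
	}
}
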
	I1209 12:15:34.554011  672636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 12:15:34.705722  672636 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 12:15:34.820347  672636 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 12:15:34.820452  672636 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 12:15:34.825692  672636 start.go:563] Will wait 60s for crictl version
	I1209 12:15:34.825762  672636 ssh_runner.go:195] Run: which crictl
	I1209 12:15:34.830698  672636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 12:15:34.881476  672636 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 12:15:34.881574  672636 ssh_runner.go:195] Run: crio --version
	I1209 12:15:34.915643  672636 ssh_runner.go:195] Run: crio --version
	I1209 12:15:34.952152  672636 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 12:15:30.641032  672179 node_ready.go:53] node "calico-763643" has status "Ready":"False"
	I1209 12:15:33.139188  672179 node_ready.go:53] node "calico-763643" has status "Ready":"False"
	I1209 12:15:35.139824  672179 node_ready.go:53] node "calico-763643" has status "Ready":"False"
	I1209 12:15:33.313091  674327 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 12:15:33.313342  674327 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 12:15:33.313386  674327 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 12:15:33.330622  674327 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I1209 12:15:33.331234  674327 main.go:141] libmachine: () Calling .GetVersion
	I1209 12:15:33.331827  674327 main.go:141] libmachine: Using API Version  1
	I1209 12:15:33.331856  674327 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 12:15:33.332197  674327 main.go:141] libmachine: () Calling .GetMachineName
	I1209 12:15:33.332373  674327 main.go:141] libmachine: (enable-default-cni-763643) Calling .GetMachineName
	I1209 12:15:33.332608  674327 main.go:141] libmachine: (enable-default-cni-763643) Calling .DriverName
	I1209 12:15:33.332786  674327 start.go:159] libmachine.API.Create for "enable-default-cni-763643" (driver="kvm2")
	I1209 12:15:33.332813  674327 client.go:168] LocalClient.Create starting
	I1209 12:15:33.332847  674327 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 12:15:33.332889  674327 main.go:141] libmachine: Decoding PEM data...
	I1209 12:15:33.332910  674327 main.go:141] libmachine: Parsing certificate...
	I1209 12:15:33.332990  674327 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 12:15:33.333018  674327 main.go:141] libmachine: Decoding PEM data...
	I1209 12:15:33.333036  674327 main.go:141] libmachine: Parsing certificate...
	I1209 12:15:33.333062  674327 main.go:141] libmachine: Running pre-create checks...
	I1209 12:15:33.333075  674327 main.go:141] libmachine: (enable-default-cni-763643) Calling .PreCreateCheck
	I1209 12:15:33.333512  674327 main.go:141] libmachine: (enable-default-cni-763643) Calling .GetConfigRaw
	I1209 12:15:33.333959  674327 main.go:141] libmachine: Creating machine...
	I1209 12:15:33.333975  674327 main.go:141] libmachine: (enable-default-cni-763643) Calling .Create
	I1209 12:15:33.334125  674327 main.go:141] libmachine: (enable-default-cni-763643) Creating KVM machine...
	I1209 12:15:33.335456  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | found existing default KVM network
	I1209 12:15:33.337929  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | I1209 12:15:33.337681  674464 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000211940}
	I1209 12:15:33.337957  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | created network xml: 
	I1209 12:15:33.337978  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | <network>
	I1209 12:15:33.337987  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG |   <name>mk-enable-default-cni-763643</name>
	I1209 12:15:33.337997  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG |   <dns enable='no'/>
	I1209 12:15:33.338004  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG |   
	I1209 12:15:33.338015  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1209 12:15:33.338022  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG |     <dhcp>
	I1209 12:15:33.338031  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1209 12:15:33.338035  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG |     </dhcp>
	I1209 12:15:33.338041  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG |   </ip>
	I1209 12:15:33.338047  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG |   
	I1209 12:15:33.338062  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | </network>
	I1209 12:15:33.338068  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | 
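
Note: the private-network XML dumped above is rendered from the free subnet picked a few lines earlier (192.168.39.0/24): the gateway takes .1 and the DHCP range spans .2 through .253. A small text/template sketch that produces XML of the same shape; the netParams struct and field names are illustrative, not minikube's actual template type.

package main

import (
	"os"
	"text/template"
)

const networkTmpl = `<network>
  <name>mk-{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>
`

type netParams struct {
	Name, Gateway, Netmask, ClientMin, ClientMax string
}

func main() {
	p := netParams{
		Name:      "enable-default-cni-763643",
		Gateway:   "192.168.39.1",
		Netmask:   "255.255.255.0",
		ClientMin: "192.168.39.2",
		ClientMax: "192.168.39.253",
	}
	// The rendered XML is what gets handed to libvirt to define the network.
	template.Must(template.New("net").Parse(networkTmpl)).Execute(os.Stdout, p)
}
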
	I1209 12:15:33.343320  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | trying to create private KVM network mk-enable-default-cni-763643 192.168.39.0/24...
	I1209 12:15:33.434329  674327 main.go:141] libmachine: (enable-default-cni-763643) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/enable-default-cni-763643 ...
	I1209 12:15:33.434374  674327 main.go:141] libmachine: (enable-default-cni-763643) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 12:15:33.434386  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | private KVM network mk-enable-default-cni-763643 192.168.39.0/24 created
	I1209 12:15:33.434405  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | I1209 12:15:33.431649  674464 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 12:15:33.434423  674327 main.go:141] libmachine: (enable-default-cni-763643) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 12:15:33.744670  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | I1209 12:15:33.744518  674464 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/enable-default-cni-763643/id_rsa...
	I1209 12:15:33.978748  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | I1209 12:15:33.978574  674464 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/enable-default-cni-763643/enable-default-cni-763643.rawdisk...
	I1209 12:15:33.978794  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | Writing magic tar header
	I1209 12:15:33.978815  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | Writing SSH key tar header
	I1209 12:15:33.978830  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | I1209 12:15:33.978737  674464 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/enable-default-cni-763643 ...
	I1209 12:15:33.978908  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/enable-default-cni-763643
	I1209 12:15:33.978934  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 12:15:33.978947  674327 main.go:141] libmachine: (enable-default-cni-763643) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/enable-default-cni-763643 (perms=drwx------)
	I1209 12:15:33.978963  674327 main.go:141] libmachine: (enable-default-cni-763643) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 12:15:33.978979  674327 main.go:141] libmachine: (enable-default-cni-763643) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 12:15:33.978993  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 12:15:33.979010  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 12:15:33.979025  674327 main.go:141] libmachine: (enable-default-cni-763643) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 12:15:33.979049  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 12:15:33.979064  674327 main.go:141] libmachine: (enable-default-cni-763643) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 12:15:33.979077  674327 main.go:141] libmachine: (enable-default-cni-763643) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 12:15:33.979086  674327 main.go:141] libmachine: (enable-default-cni-763643) Creating domain...
	I1209 12:15:33.979099  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | Checking permissions on dir: /home/jenkins
	I1209 12:15:33.979107  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | Checking permissions on dir: /home
	I1209 12:15:33.979120  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | Skipping /home - not owner
	I1209 12:15:33.980266  674327 main.go:141] libmachine: (enable-default-cni-763643) define libvirt domain using xml: 
	I1209 12:15:33.980286  674327 main.go:141] libmachine: (enable-default-cni-763643) <domain type='kvm'>
	I1209 12:15:33.980298  674327 main.go:141] libmachine: (enable-default-cni-763643)   <name>enable-default-cni-763643</name>
	I1209 12:15:33.980306  674327 main.go:141] libmachine: (enable-default-cni-763643)   <memory unit='MiB'>3072</memory>
	I1209 12:15:33.980318  674327 main.go:141] libmachine: (enable-default-cni-763643)   <vcpu>2</vcpu>
	I1209 12:15:33.980325  674327 main.go:141] libmachine: (enable-default-cni-763643)   <features>
	I1209 12:15:33.980332  674327 main.go:141] libmachine: (enable-default-cni-763643)     <acpi/>
	I1209 12:15:33.980338  674327 main.go:141] libmachine: (enable-default-cni-763643)     <apic/>
	I1209 12:15:33.980347  674327 main.go:141] libmachine: (enable-default-cni-763643)     <pae/>
	I1209 12:15:33.980363  674327 main.go:141] libmachine: (enable-default-cni-763643)     
	I1209 12:15:33.980374  674327 main.go:141] libmachine: (enable-default-cni-763643)   </features>
	I1209 12:15:33.980384  674327 main.go:141] libmachine: (enable-default-cni-763643)   <cpu mode='host-passthrough'>
	I1209 12:15:33.980391  674327 main.go:141] libmachine: (enable-default-cni-763643)   
	I1209 12:15:33.980410  674327 main.go:141] libmachine: (enable-default-cni-763643)   </cpu>
	I1209 12:15:33.980421  674327 main.go:141] libmachine: (enable-default-cni-763643)   <os>
	I1209 12:15:33.980428  674327 main.go:141] libmachine: (enable-default-cni-763643)     <type>hvm</type>
	I1209 12:15:33.980444  674327 main.go:141] libmachine: (enable-default-cni-763643)     <boot dev='cdrom'/>
	I1209 12:15:33.980451  674327 main.go:141] libmachine: (enable-default-cni-763643)     <boot dev='hd'/>
	I1209 12:15:33.980460  674327 main.go:141] libmachine: (enable-default-cni-763643)     <bootmenu enable='no'/>
	I1209 12:15:33.980469  674327 main.go:141] libmachine: (enable-default-cni-763643)   </os>
	I1209 12:15:33.980479  674327 main.go:141] libmachine: (enable-default-cni-763643)   <devices>
	I1209 12:15:33.980490  674327 main.go:141] libmachine: (enable-default-cni-763643)     <disk type='file' device='cdrom'>
	I1209 12:15:33.980502  674327 main.go:141] libmachine: (enable-default-cni-763643)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/enable-default-cni-763643/boot2docker.iso'/>
	I1209 12:15:33.980515  674327 main.go:141] libmachine: (enable-default-cni-763643)       <target dev='hdc' bus='scsi'/>
	I1209 12:15:33.980527  674327 main.go:141] libmachine: (enable-default-cni-763643)       <readonly/>
	I1209 12:15:33.980535  674327 main.go:141] libmachine: (enable-default-cni-763643)     </disk>
	I1209 12:15:33.980543  674327 main.go:141] libmachine: (enable-default-cni-763643)     <disk type='file' device='disk'>
	I1209 12:15:33.980555  674327 main.go:141] libmachine: (enable-default-cni-763643)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 12:15:33.980571  674327 main.go:141] libmachine: (enable-default-cni-763643)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/enable-default-cni-763643/enable-default-cni-763643.rawdisk'/>
	I1209 12:15:33.980593  674327 main.go:141] libmachine: (enable-default-cni-763643)       <target dev='hda' bus='virtio'/>
	I1209 12:15:33.980602  674327 main.go:141] libmachine: (enable-default-cni-763643)     </disk>
	I1209 12:15:33.980613  674327 main.go:141] libmachine: (enable-default-cni-763643)     <interface type='network'>
	I1209 12:15:33.980623  674327 main.go:141] libmachine: (enable-default-cni-763643)       <source network='mk-enable-default-cni-763643'/>
	I1209 12:15:33.980633  674327 main.go:141] libmachine: (enable-default-cni-763643)       <model type='virtio'/>
	I1209 12:15:33.980643  674327 main.go:141] libmachine: (enable-default-cni-763643)     </interface>
	I1209 12:15:33.980653  674327 main.go:141] libmachine: (enable-default-cni-763643)     <interface type='network'>
	I1209 12:15:33.980662  674327 main.go:141] libmachine: (enable-default-cni-763643)       <source network='default'/>
	I1209 12:15:33.980669  674327 main.go:141] libmachine: (enable-default-cni-763643)       <model type='virtio'/>
	I1209 12:15:33.980682  674327 main.go:141] libmachine: (enable-default-cni-763643)     </interface>
	I1209 12:15:33.980689  674327 main.go:141] libmachine: (enable-default-cni-763643)     <serial type='pty'>
	I1209 12:15:33.980702  674327 main.go:141] libmachine: (enable-default-cni-763643)       <target port='0'/>
	I1209 12:15:33.980709  674327 main.go:141] libmachine: (enable-default-cni-763643)     </serial>
	I1209 12:15:33.980721  674327 main.go:141] libmachine: (enable-default-cni-763643)     <console type='pty'>
	I1209 12:15:33.980731  674327 main.go:141] libmachine: (enable-default-cni-763643)       <target type='serial' port='0'/>
	I1209 12:15:33.980739  674327 main.go:141] libmachine: (enable-default-cni-763643)     </console>
	I1209 12:15:33.980749  674327 main.go:141] libmachine: (enable-default-cni-763643)     <rng model='virtio'>
	I1209 12:15:33.980760  674327 main.go:141] libmachine: (enable-default-cni-763643)       <backend model='random'>/dev/random</backend>
	I1209 12:15:33.980770  674327 main.go:141] libmachine: (enable-default-cni-763643)     </rng>
	I1209 12:15:33.980778  674327 main.go:141] libmachine: (enable-default-cni-763643)     
	I1209 12:15:33.980788  674327 main.go:141] libmachine: (enable-default-cni-763643)     
	I1209 12:15:33.980797  674327 main.go:141] libmachine: (enable-default-cni-763643)   </devices>
	I1209 12:15:33.980807  674327 main.go:141] libmachine: (enable-default-cni-763643) </domain>
	I1209 12:15:33.980818  674327 main.go:141] libmachine: (enable-default-cni-763643) 
	I1209 12:15:33.985182  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | domain enable-default-cni-763643 has defined MAC address 52:54:00:ec:de:f7 in network default
	I1209 12:15:33.986003  674327 main.go:141] libmachine: (enable-default-cni-763643) Ensuring networks are active...
	I1209 12:15:33.986039  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | domain enable-default-cni-763643 has defined MAC address 52:54:00:41:df:06 in network mk-enable-default-cni-763643
	I1209 12:15:33.986827  674327 main.go:141] libmachine: (enable-default-cni-763643) Ensuring network default is active
	I1209 12:15:33.987298  674327 main.go:141] libmachine: (enable-default-cni-763643) Ensuring network mk-enable-default-cni-763643 is active
	I1209 12:15:33.988006  674327 main.go:141] libmachine: (enable-default-cni-763643) Getting domain xml...
	I1209 12:15:33.988969  674327 main.go:141] libmachine: (enable-default-cni-763643) Creating domain...
	I1209 12:15:35.574908  674327 main.go:141] libmachine: (enable-default-cni-763643) Waiting to get IP...
	I1209 12:15:35.575843  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | domain enable-default-cni-763643 has defined MAC address 52:54:00:41:df:06 in network mk-enable-default-cni-763643
	I1209 12:15:35.576461  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | unable to find current IP address of domain enable-default-cni-763643 in network mk-enable-default-cni-763643
	I1209 12:15:35.576555  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | I1209 12:15:35.576457  674464 retry.go:31] will retry after 285.549998ms: waiting for machine to come up
	I1209 12:15:34.953212  672636 main.go:141] libmachine: (custom-flannel-763643) Calling .GetIP
	I1209 12:15:34.956485  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:34.956911  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:91:d8", ip: ""} in network mk-custom-flannel-763643: {Iface:virbr3 ExpiryTime:2024-12-09 13:15:21 +0000 UTC Type:0 Mac:52:54:00:42:91:d8 Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:custom-flannel-763643 Clientid:01:52:54:00:42:91:d8}
	I1209 12:15:34.956938  672636 main.go:141] libmachine: (custom-flannel-763643) DBG | domain custom-flannel-763643 has defined IP address 192.168.61.89 and MAC address 52:54:00:42:91:d8 in network mk-custom-flannel-763643
	I1209 12:15:34.957241  672636 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 12:15:34.961619  672636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
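
Note: the bash one-liner above makes the hosts-file update idempotent: strip any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts. The same idea expressed directly in Go (a sketch only; minikube runs the shell command shown above over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so it contains exactly one line mapping
// host to ip, preserving every unrelated entry.
func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimSpace(line), host) {
			continue // drop stale mappings for this host
		}
		kept = append(kept, line)
	}
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1] // avoid stacking blank lines at the end
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"))
}
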
	I1209 12:15:34.976659  672636 kubeadm.go:883] updating cluster {Name:custom-flannel-763643 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-763643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.61.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 12:15:34.976772  672636 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 12:15:34.976817  672636 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 12:15:35.023864  672636 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 12:15:35.023953  672636 ssh_runner.go:195] Run: which lz4
	I1209 12:15:35.028139  672636 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 12:15:35.033819  672636 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 12:15:35.033901  672636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 12:15:37.141216  672179 node_ready.go:53] node "calico-763643" has status "Ready":"False"
	I1209 12:15:38.151215  672179 node_ready.go:49] node "calico-763643" has status "Ready":"True"
	I1209 12:15:38.151252  672179 node_ready.go:38] duration metric: took 9.515770112s for node "calico-763643" to be "Ready" ...
	I1209 12:15:38.151282  672179 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 12:15:38.168619  672179 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-5d7d9cdfd8-rcd9v" in "kube-system" namespace to be "Ready" ...
	I1209 12:15:40.175520  672179 pod_ready.go:103] pod "calico-kube-controllers-5d7d9cdfd8-rcd9v" in "kube-system" namespace has status "Ready":"False"
	I1209 12:15:35.864303  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | domain enable-default-cni-763643 has defined MAC address 52:54:00:41:df:06 in network mk-enable-default-cni-763643
	I1209 12:15:35.865029  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | unable to find current IP address of domain enable-default-cni-763643 in network mk-enable-default-cni-763643
	I1209 12:15:35.865057  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | I1209 12:15:35.864996  674464 retry.go:31] will retry after 348.571215ms: waiting for machine to come up
	I1209 12:15:36.219501  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | domain enable-default-cni-763643 has defined MAC address 52:54:00:41:df:06 in network mk-enable-default-cni-763643
	I1209 12:15:36.220288  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | unable to find current IP address of domain enable-default-cni-763643 in network mk-enable-default-cni-763643
	I1209 12:15:36.220313  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | I1209 12:15:36.220189  674464 retry.go:31] will retry after 444.360409ms: waiting for machine to come up
	I1209 12:15:36.666812  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | domain enable-default-cni-763643 has defined MAC address 52:54:00:41:df:06 in network mk-enable-default-cni-763643
	I1209 12:15:36.667460  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | unable to find current IP address of domain enable-default-cni-763643 in network mk-enable-default-cni-763643
	I1209 12:15:36.667495  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | I1209 12:15:36.667435  674464 retry.go:31] will retry after 427.943363ms: waiting for machine to come up
	I1209 12:15:37.097279  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | domain enable-default-cni-763643 has defined MAC address 52:54:00:41:df:06 in network mk-enable-default-cni-763643
	I1209 12:15:37.097793  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | unable to find current IP address of domain enable-default-cni-763643 in network mk-enable-default-cni-763643
	I1209 12:15:37.097826  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | I1209 12:15:37.097745  674464 retry.go:31] will retry after 749.167882ms: waiting for machine to come up
	I1209 12:15:37.848193  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | domain enable-default-cni-763643 has defined MAC address 52:54:00:41:df:06 in network mk-enable-default-cni-763643
	I1209 12:15:37.848660  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | unable to find current IP address of domain enable-default-cni-763643 in network mk-enable-default-cni-763643
	I1209 12:15:37.848700  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | I1209 12:15:37.848604  674464 retry.go:31] will retry after 652.040923ms: waiting for machine to come up
	I1209 12:15:38.502011  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | domain enable-default-cni-763643 has defined MAC address 52:54:00:41:df:06 in network mk-enable-default-cni-763643
	I1209 12:15:38.502519  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | unable to find current IP address of domain enable-default-cni-763643 in network mk-enable-default-cni-763643
	I1209 12:15:38.502552  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | I1209 12:15:38.502478  674464 retry.go:31] will retry after 1.056233468s: waiting for machine to come up
	I1209 12:15:39.560082  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | domain enable-default-cni-763643 has defined MAC address 52:54:00:41:df:06 in network mk-enable-default-cni-763643
	I1209 12:15:39.560701  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | unable to find current IP address of domain enable-default-cni-763643 in network mk-enable-default-cni-763643
	I1209 12:15:39.560724  674327 main.go:141] libmachine: (enable-default-cni-763643) DBG | I1209 12:15:39.560668  674464 retry.go:31] will retry after 1.113101031s: waiting for machine to come up
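
Note: the "will retry after ..." lines above come from polling the libvirt DHCP leases with a growing, jittered delay until the new domain reports an address. A generic sketch of that poll loop; retry.go's real signature differs, and getIP here is a stand-in for the lease lookup.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls getIP until it returns an address or the deadline passes,
// sleeping a jittered, growing backoff between attempts.
func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := getIP(); err == nil && ip != "" {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("attempt %d: no IP yet, will retry after %v\n", attempt, sleep)
		time.Sleep(sleep)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	// Stand-in lookup that "finds" an IP on the fourth attempt.
	n := 0
	ip, err := waitForIP(func() (string, error) {
		n++
		if n < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.10", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
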
	I1209 12:15:36.413066  672636 crio.go:462] duration metric: took 1.384968716s to copy over tarball
	I1209 12:15:36.413178  672636 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 12:15:38.984963  672636 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.571731034s)
	I1209 12:15:38.985020  672636 crio.go:469] duration metric: took 2.571922515s to extract the tarball
	I1209 12:15:38.985032  672636 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 12:15:39.023800  672636 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 12:15:39.069599  672636 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 12:15:39.069627  672636 cache_images.go:84] Images are preloaded, skipping loading
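
Note: "all images are preloaded" above is decided by running `crictl images --output json` on the guest and checking the repo tags for the expected control-plane image (before the tarball was extracted, the same check logged "couldn't find preloaded image"). A sketch of that decision, assuming the usual crictl JSON shape with an images[].repoTags list:

package main

import (
	"encoding/json"
	"fmt"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the crictl JSON output lists the given tag.
func hasImage(crictlJSON []byte, tag string) (bool, error) {
	var parsed crictlImages
	if err := json.Unmarshal(crictlJSON, &parsed); err != nil {
		return false, err
	}
	for _, img := range parsed.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"]}]}`)
	fmt.Println(hasImage(sample, "registry.k8s.io/kube-apiserver:v1.31.2"))
}
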
	I1209 12:15:39.069635  672636 kubeadm.go:934] updating node { 192.168.61.89 8443 v1.31.2 crio true true} ...
	I1209 12:15:39.069740  672636 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-763643 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-763643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I1209 12:15:39.069809  672636 ssh_runner.go:195] Run: crio config
	I1209 12:15:39.127715  672636 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1209 12:15:39.127759  672636 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 12:15:39.127794  672636 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.89 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-763643 NodeName:custom-flannel-763643 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 12:15:39.127973  672636 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-763643"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.89"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.89"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 12:15:39.128044  672636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 12:15:39.137273  672636 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 12:15:39.137343  672636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 12:15:39.146380  672636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1209 12:15:39.162794  672636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 12:15:39.180536  672636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1209 12:15:39.197373  672636 ssh_runner.go:195] Run: grep 192.168.61.89	control-plane.minikube.internal$ /etc/hosts
	I1209 12:15:39.201986  672636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 12:15:39.214753  672636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 12:15:39.328844  672636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 12:15:39.345227  672636 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643 for IP: 192.168.61.89
	I1209 12:15:39.345324  672636 certs.go:194] generating shared ca certs ...
	I1209 12:15:39.345361  672636 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:15:39.345605  672636 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 12:15:39.345704  672636 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 12:15:39.345752  672636 certs.go:256] generating profile certs ...
	I1209 12:15:39.345852  672636 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/client.key
	I1209 12:15:39.345893  672636 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/client.crt with IP's: []
	I1209 12:15:39.698710  672636 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/client.crt ...
	I1209 12:15:39.698754  672636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/client.crt: {Name:mkfa9b6a904cb9b5d388854811897a407cc43be4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:15:39.698974  672636 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/client.key ...
	I1209 12:15:39.698992  672636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/client.key: {Name:mka876af99e3a38ada06b662cf06729032bb6ab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:15:39.699083  672636 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/apiserver.key.4e8e870a
	I1209 12:15:39.699100  672636 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/apiserver.crt.4e8e870a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.89]
	I1209 12:15:39.886877  672636 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/apiserver.crt.4e8e870a ...
	I1209 12:15:39.886916  672636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/apiserver.crt.4e8e870a: {Name:mka5c6f9a5ff8548d53dd9311a404664c1758356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:15:39.887115  672636 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/apiserver.key.4e8e870a ...
	I1209 12:15:39.887136  672636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/apiserver.key.4e8e870a: {Name:mk87a8c9e754b7a178f5ce78a626e806c8b2bc6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:15:39.887249  672636 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/apiserver.crt.4e8e870a -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/apiserver.crt
	I1209 12:15:39.887385  672636 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/apiserver.key.4e8e870a -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/apiserver.key
	I1209 12:15:39.887475  672636 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/proxy-client.key
	I1209 12:15:39.887498  672636 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/proxy-client.crt with IP's: []
	I1209 12:15:40.183958  672636 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/proxy-client.crt ...
	I1209 12:15:40.183992  672636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/proxy-client.crt: {Name:mk75514237be064f7cf451a4e9e11f9f36a8caa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:15:40.184182  672636 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/proxy-client.key ...
	I1209 12:15:40.184199  672636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/proxy-client.key: {Name:mk97ca4b88db399318b59bef1d6772413857e0fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
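
Note: the apiserver certificate generated above is a CA-signed serving cert whose IP SANs cover the cluster service IP, localhost and the node IP ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.89]). A self-contained crypto/x509 sketch of issuing such a cert; it mirrors the idea, not minikube's crypto.go, and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA, standing in for the shared minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs seen in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.89"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
}
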
	I1209 12:15:40.184443  672636 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 12:15:40.184497  672636 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 12:15:40.184512  672636 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 12:15:40.184544  672636 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 12:15:40.184580  672636 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 12:15:40.184614  672636 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 12:15:40.184675  672636 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 12:15:40.185495  672636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 12:15:40.219445  672636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 12:15:40.260479  672636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 12:15:40.293783  672636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 12:15:40.323145  672636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 12:15:40.351257  672636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 12:15:40.380014  672636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 12:15:40.404594  672636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/custom-flannel-763643/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 12:15:40.433614  672636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 12:15:40.459924  672636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 12:15:40.485346  672636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 12:15:40.511475  672636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 12:15:40.532425  672636 ssh_runner.go:195] Run: openssl version
	I1209 12:15:40.538553  672636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 12:15:40.552894  672636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 12:15:40.557855  672636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 12:15:40.557930  672636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 12:15:40.564586  672636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 12:15:40.575920  672636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 12:15:40.587835  672636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 12:15:40.593621  672636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 12:15:40.593698  672636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 12:15:40.600958  672636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 12:15:40.614758  672636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 12:15:40.627245  672636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 12:15:40.632951  672636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 12:15:40.633042  672636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 12:15:40.638882  672636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
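
Note: the hash/symlink sequence above registers each PEM with the system trust store: `openssl x509 -hash -noout` prints the subject hash (e.g. b5213941 for minikubeCA.pem) and /etc/ssl/certs/<hash>.0 is symlinked to the PEM. A sketch of one such registration; the hash is obtained from openssl because it is OpenSSL's canonical subject hash, not a plain digest.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkIntoTrustStore symlinks /etc/ssl/certs/<subject-hash>.0 at pemPath,
// mirroring the openssl + ln -fs commands in the log.
func linkIntoTrustStore(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem"))
}
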
	I1209 12:15:40.649774  672636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 12:15:40.654753  672636 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 12:15:40.654865  672636 kubeadm.go:392] StartCluster: {Name:custom-flannel-763643 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-763643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.61.89 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 12:15:40.654972  672636 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 12:15:40.655027  672636 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 12:15:40.695800  672636 cri.go:89] found id: ""
	I1209 12:15:40.695890  672636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 12:15:40.705621  672636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 12:15:40.714935  672636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 12:15:40.725925  672636 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 12:15:40.725948  672636 kubeadm.go:157] found existing configuration files:
	
	I1209 12:15:40.725995  672636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 12:15:40.736900  672636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 12:15:40.736980  672636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 12:15:40.746559  672636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 12:15:40.756022  672636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 12:15:40.756088  672636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 12:15:40.765510  672636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 12:15:40.774861  672636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 12:15:40.774931  672636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 12:15:40.785469  672636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 12:15:40.794419  672636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 12:15:40.794517  672636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
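	The grep/rm sequence above is the stale-config check: each kubeadm-managed kubeconfig is searched for the expected control-plane endpoint and deleted when that endpoint is absent (on this first start the files simply do not exist, so every grep exits with status 2 and the rm calls are no-ops). A rough Go equivalent of the loop, run locally instead of over ssh_runner; the paths and endpoint string mirror the log, and the function name is invented for the sketch.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes kubeadm-managed config files that do not
// reference the expected control-plane endpoint, mirroring the
// grep-then-rm sequence in the log above.
func pruneStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			// Missing file: nothing to clean up (the common first-start case).
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
			_ = os.Remove(f)
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}
```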
	I1209 12:15:40.804779  672636 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 12:15:40.865097  672636 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 12:15:40.865455  672636 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 12:15:41.010144  672636 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 12:15:41.010334  672636 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 12:15:41.010564  672636 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 12:15:41.029426  672636 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
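	The ssh_runner.go:286 line shows how the bootstrap is actually launched: a single bash -c invocation that puts the versioned kubeadm binaries first on PATH and passes a fixed --ignore-preflight-errors list. A small Go sketch that assembles and runs a command of the same shape; the paths and error names are copied from the log (abbreviated), and the program is illustrative rather than minikube's bootstrapper.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// A subset of the ignored preflight checks from the log above.
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"Port-10250", "Swap", "NumCPU", "Mem",
	}
	// Same shape as the logged command: bash -c with a PATH override so the
	// cached kubeadm binary for the target Kubernetes version is used.
	cmdline := fmt.Sprintf(
		`sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
		strings.Join(ignored, ","),
	)
	cmd := exec.Command("/bin/bash", "-c", cmdline)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "kubeadm init failed:", err)
	}
}
```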
	
	
	==> CRI-O <==
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.493638880Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746543493616223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d51f332-aa18-46e9-af06-8ac745b22d10 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.494045422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2ba4d28-fdc4-47f7-ab09-c5b961c73c55 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.494095129Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2ba4d28-fdc4-47f7-ab09-c5b961c73c55 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.494285169Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6497e24ed8d6cf7d755dd9862baef214e283f280f4ef19432b7b946ffdc04af,PodSandboxId:9c6c1503f2aa142f6e1b790794ac4f72469f489f668cb81dbd99ad79be651d54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745455068209528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b53e3ba-9bc9-4b5a-bec9-d06336616c8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc393f8ca069a02f4255df67b57878759855f7c23b28e333fa0164b3723d3ee,PodSandboxId:7b14ae84b4e8d70026481c33596cd578202967e2d5f80b7f1344ad74a2a8aef4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745454054388888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bb47s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c6e423fd231089f00f6c48db7dc922a6a0e874923f459286cce0c29e586c56,PodSandboxId:3bf9b1181e2f348d99e3601e877d02010b31bb3f3dd25be4c94adeac273be018,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745454043841945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rr27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a5dd0401-80bf-4c87-9771-e1837c960425,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a06e322b94f2e8c310d68246143453b976f57c7b04a64527bb6d2624556e53b8,PodSandboxId:505c512572eeadc3423e7d92e427ae440cce2a8578f8be04d1e408798f589494,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733745453383214534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgs52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5a3463e-e955-4345-9559-b23cce44fa0e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe459e350a0a69c68e97eabcc631964637bad5192e31fc4f9bde455313887ff,PodSandboxId:54dba79d9d7cecfcb8ad76bb275e38e59d7421c662be0f31363876f1335e47a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173374544
2491333987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d9a3116ed44b8533a3cadf46fa536a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1efd65a61828e45b53b8be59a071fc5b43c47d488fe8aba5126af1fe231338fa,PodSandboxId:e9eddec54729a9e6fc7f103e8e007467a867f4ec9e1e250497854ff9068a0e5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Cre
atedAt:1733745442461208526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ab6bcaa941321f87de927012cee9d,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9f81dc1c7ecf5412ad6a0ac64634688986ff124ce873c0846bd10cf54ff761,PodSandboxId:0e1162e4c3cf07ce5d3804edb5623a5567c82710bf86e6dd75a93bddd7c26573,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17337
45442475944828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73f65421285a8dd1839e442c0c6af24,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba56b590110406b1c4e6646e981b8f00f6fbc55308dacd367bda5339a72a122,PodSandboxId:62374aca74b288e067c703ef56dd1a6f6f6ead07233461f8e14daf9b603e84e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745442438840869,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05146362456c7def7e2b7c92028be8b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404b8057940700687ef437a2f86b23b1eb47d811420982efb7d526eb07510390,PodSandboxId:913e6ad25da255a4f64f5ac795cc16bcd6a8e9cd85a4c954180010d07e3629d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733745155403085403,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05146362456c7def7e2b7c92028be8b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2ba4d28-fdc4-47f7-ab09-c5b961c73c55 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.544240148Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=999edca2-675a-4a0f-8ee6-e51eb1844acd name=/runtime.v1.RuntimeService/Version
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.544351830Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=999edca2-675a-4a0f-8ee6-e51eb1844acd name=/runtime.v1.RuntimeService/Version
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.545547278Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0facc388-10c8-470c-86b4-484dbf7285b2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.546164626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746543546130700,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0facc388-10c8-470c-86b4-484dbf7285b2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.546938973Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65ed5463-e670-4fae-a517-958182dde9ef name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.547056167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65ed5463-e670-4fae-a517-958182dde9ef name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.547391918Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6497e24ed8d6cf7d755dd9862baef214e283f280f4ef19432b7b946ffdc04af,PodSandboxId:9c6c1503f2aa142f6e1b790794ac4f72469f489f668cb81dbd99ad79be651d54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745455068209528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b53e3ba-9bc9-4b5a-bec9-d06336616c8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc393f8ca069a02f4255df67b57878759855f7c23b28e333fa0164b3723d3ee,PodSandboxId:7b14ae84b4e8d70026481c33596cd578202967e2d5f80b7f1344ad74a2a8aef4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745454054388888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bb47s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c6e423fd231089f00f6c48db7dc922a6a0e874923f459286cce0c29e586c56,PodSandboxId:3bf9b1181e2f348d99e3601e877d02010b31bb3f3dd25be4c94adeac273be018,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745454043841945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rr27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a5dd0401-80bf-4c87-9771-e1837c960425,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a06e322b94f2e8c310d68246143453b976f57c7b04a64527bb6d2624556e53b8,PodSandboxId:505c512572eeadc3423e7d92e427ae440cce2a8578f8be04d1e408798f589494,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733745453383214534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgs52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5a3463e-e955-4345-9559-b23cce44fa0e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe459e350a0a69c68e97eabcc631964637bad5192e31fc4f9bde455313887ff,PodSandboxId:54dba79d9d7cecfcb8ad76bb275e38e59d7421c662be0f31363876f1335e47a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173374544
2491333987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d9a3116ed44b8533a3cadf46fa536a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1efd65a61828e45b53b8be59a071fc5b43c47d488fe8aba5126af1fe231338fa,PodSandboxId:e9eddec54729a9e6fc7f103e8e007467a867f4ec9e1e250497854ff9068a0e5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Cre
atedAt:1733745442461208526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ab6bcaa941321f87de927012cee9d,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9f81dc1c7ecf5412ad6a0ac64634688986ff124ce873c0846bd10cf54ff761,PodSandboxId:0e1162e4c3cf07ce5d3804edb5623a5567c82710bf86e6dd75a93bddd7c26573,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17337
45442475944828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73f65421285a8dd1839e442c0c6af24,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba56b590110406b1c4e6646e981b8f00f6fbc55308dacd367bda5339a72a122,PodSandboxId:62374aca74b288e067c703ef56dd1a6f6f6ead07233461f8e14daf9b603e84e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745442438840869,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05146362456c7def7e2b7c92028be8b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404b8057940700687ef437a2f86b23b1eb47d811420982efb7d526eb07510390,PodSandboxId:913e6ad25da255a4f64f5ac795cc16bcd6a8e9cd85a4c954180010d07e3629d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733745155403085403,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05146362456c7def7e2b7c92028be8b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65ed5463-e670-4fae-a517-958182dde9ef name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.588603993Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c2d3c7d-658e-4fb0-8eda-dc8431ea6656 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.588704880Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c2d3c7d-658e-4fb0-8eda-dc8431ea6656 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.590272575Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8bd235b9-b4da-4230-a810-060222f4a8ec name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.590923939Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746543590898235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8bd235b9-b4da-4230-a810-060222f4a8ec name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.591613710Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0dbb7d93-d606-41d9-9660-0d52e6b244fe name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.591697454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0dbb7d93-d606-41d9-9660-0d52e6b244fe name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.591962858Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6497e24ed8d6cf7d755dd9862baef214e283f280f4ef19432b7b946ffdc04af,PodSandboxId:9c6c1503f2aa142f6e1b790794ac4f72469f489f668cb81dbd99ad79be651d54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745455068209528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b53e3ba-9bc9-4b5a-bec9-d06336616c8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc393f8ca069a02f4255df67b57878759855f7c23b28e333fa0164b3723d3ee,PodSandboxId:7b14ae84b4e8d70026481c33596cd578202967e2d5f80b7f1344ad74a2a8aef4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745454054388888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bb47s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c6e423fd231089f00f6c48db7dc922a6a0e874923f459286cce0c29e586c56,PodSandboxId:3bf9b1181e2f348d99e3601e877d02010b31bb3f3dd25be4c94adeac273be018,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745454043841945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rr27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a5dd0401-80bf-4c87-9771-e1837c960425,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a06e322b94f2e8c310d68246143453b976f57c7b04a64527bb6d2624556e53b8,PodSandboxId:505c512572eeadc3423e7d92e427ae440cce2a8578f8be04d1e408798f589494,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733745453383214534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgs52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5a3463e-e955-4345-9559-b23cce44fa0e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe459e350a0a69c68e97eabcc631964637bad5192e31fc4f9bde455313887ff,PodSandboxId:54dba79d9d7cecfcb8ad76bb275e38e59d7421c662be0f31363876f1335e47a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173374544
2491333987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d9a3116ed44b8533a3cadf46fa536a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1efd65a61828e45b53b8be59a071fc5b43c47d488fe8aba5126af1fe231338fa,PodSandboxId:e9eddec54729a9e6fc7f103e8e007467a867f4ec9e1e250497854ff9068a0e5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Cre
atedAt:1733745442461208526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ab6bcaa941321f87de927012cee9d,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9f81dc1c7ecf5412ad6a0ac64634688986ff124ce873c0846bd10cf54ff761,PodSandboxId:0e1162e4c3cf07ce5d3804edb5623a5567c82710bf86e6dd75a93bddd7c26573,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17337
45442475944828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73f65421285a8dd1839e442c0c6af24,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba56b590110406b1c4e6646e981b8f00f6fbc55308dacd367bda5339a72a122,PodSandboxId:62374aca74b288e067c703ef56dd1a6f6f6ead07233461f8e14daf9b603e84e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745442438840869,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05146362456c7def7e2b7c92028be8b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404b8057940700687ef437a2f86b23b1eb47d811420982efb7d526eb07510390,PodSandboxId:913e6ad25da255a4f64f5ac795cc16bcd6a8e9cd85a4c954180010d07e3629d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733745155403085403,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05146362456c7def7e2b7c92028be8b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0dbb7d93-d606-41d9-9660-0d52e6b244fe name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.642864838Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=639c80db-6799-407c-99bf-8f8efb2d5035 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.643026448Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=639c80db-6799-407c-99bf-8f8efb2d5035 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.644847267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64a24c87-2d0a-44de-80ee-ee8076728315 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.645559632Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746543645520047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64a24c87-2d0a-44de-80ee-ee8076728315 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.646214640Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c9d7211-5327-4df2-b5d1-2670ec1c6dc3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.646361166Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c9d7211-5327-4df2-b5d1-2670ec1c6dc3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:15:43 default-k8s-diff-port-482476 crio[709]: time="2024-12-09 12:15:43.647380773Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6497e24ed8d6cf7d755dd9862baef214e283f280f4ef19432b7b946ffdc04af,PodSandboxId:9c6c1503f2aa142f6e1b790794ac4f72469f489f668cb81dbd99ad79be651d54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745455068209528,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b53e3ba-9bc9-4b5a-bec9-d06336616c8a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc393f8ca069a02f4255df67b57878759855f7c23b28e333fa0164b3723d3ee,PodSandboxId:7b14ae84b4e8d70026481c33596cd578202967e2d5f80b7f1344ad74a2a8aef4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745454054388888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bb47s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c6e423fd231089f00f6c48db7dc922a6a0e874923f459286cce0c29e586c56,PodSandboxId:3bf9b1181e2f348d99e3601e877d02010b31bb3f3dd25be4c94adeac273be018,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745454043841945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7rr27,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a5dd0401-80bf-4c87-9771-e1837c960425,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a06e322b94f2e8c310d68246143453b976f57c7b04a64527bb6d2624556e53b8,PodSandboxId:505c512572eeadc3423e7d92e427ae440cce2a8578f8be04d1e408798f589494,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733745453383214534,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgs52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5a3463e-e955-4345-9559-b23cce44fa0e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fe459e350a0a69c68e97eabcc631964637bad5192e31fc4f9bde455313887ff,PodSandboxId:54dba79d9d7cecfcb8ad76bb275e38e59d7421c662be0f31363876f1335e47a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173374544
2491333987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08d9a3116ed44b8533a3cadf46fa536a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1efd65a61828e45b53b8be59a071fc5b43c47d488fe8aba5126af1fe231338fa,PodSandboxId:e9eddec54729a9e6fc7f103e8e007467a867f4ec9e1e250497854ff9068a0e5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Cre
atedAt:1733745442461208526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ab6bcaa941321f87de927012cee9d,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c9f81dc1c7ecf5412ad6a0ac64634688986ff124ce873c0846bd10cf54ff761,PodSandboxId:0e1162e4c3cf07ce5d3804edb5623a5567c82710bf86e6dd75a93bddd7c26573,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17337
45442475944828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73f65421285a8dd1839e442c0c6af24,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba56b590110406b1c4e6646e981b8f00f6fbc55308dacd367bda5339a72a122,PodSandboxId:62374aca74b288e067c703ef56dd1a6f6f6ead07233461f8e14daf9b603e84e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745442438840869,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05146362456c7def7e2b7c92028be8b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404b8057940700687ef437a2f86b23b1eb47d811420982efb7d526eb07510390,PodSandboxId:913e6ad25da255a4f64f5ac795cc16bcd6a8e9cd85a4c954180010d07e3629d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733745155403085403,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-482476,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05146362456c7def7e2b7c92028be8b7,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c9d7211-5327-4df2-b5d1-2670ec1c6dc3 name=/runtime.v1.RuntimeService/ListContainers
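	The crio debug entries above are the server side of repeated RuntimeService calls (Version, ImageFsInfo, ListContainers) issued while these logs were collected. A short Go sketch of making the same ListContainers RPC against the CRI-O socket, assuming the google.golang.org/grpc and k8s.io/cri-api modules are available; an empty filter reproduces the "No filters were applied" case seen above.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O answers the CRI gRPC API on this unix socket (see the
	// cri-socket annotation in the node description below).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter returns every container, matching the
	// "No filters were applied, returning full container list" lines above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
	}
}
```

	`crictl ps -a` is the command-line equivalent of this RPC and is what minikube shells out to earlier in this log when listing kube-system containers.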
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a6497e24ed8d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner       0                   9c6c1503f2aa1       storage-provisioner
	2bc393f8ca069       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   18 minutes ago      Running             coredns                   0                   7b14ae84b4e8d       coredns-7c65d6cfc9-bb47s
	d8c6e423fd231       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   18 minutes ago      Running             coredns                   0                   3bf9b1181e2f3       coredns-7c65d6cfc9-7rr27
	a06e322b94f2e       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   18 minutes ago      Running             kube-proxy                0                   505c512572eea       kube-proxy-pgs52
	2fe459e350a0a       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   18 minutes ago      Running             kube-controller-manager   2                   54dba79d9d7ce       kube-controller-manager-default-k8s-diff-port-482476
	3c9f81dc1c7ec       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   18 minutes ago      Running             etcd                      2                   0e1162e4c3cf0       etcd-default-k8s-diff-port-482476
	1efd65a61828e       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   18 minutes ago      Running             kube-scheduler            2                   e9eddec54729a       kube-scheduler-default-k8s-diff-port-482476
	1ba56b5901104       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   18 minutes ago      Running             kube-apiserver            2                   62374aca74b28       kube-apiserver-default-k8s-diff-port-482476
	404b805794070       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   23 minutes ago      Exited              kube-apiserver            1                   913e6ad25da25       kube-apiserver-default-k8s-diff-port-482476
	
	
	==> coredns [2bc393f8ca069a02f4255df67b57878759855f7c23b28e333fa0164b3723d3ee] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d8c6e423fd231089f00f6c48db7dc922a6a0e874923f459286cce0c29e586c56] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-482476
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-482476
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=default-k8s-diff-port-482476
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T11_57_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 11:57:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-482476
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 12:15:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 12:12:57 +0000   Mon, 09 Dec 2024 11:57:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 12:12:57 +0000   Mon, 09 Dec 2024 11:57:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 12:12:57 +0000   Mon, 09 Dec 2024 11:57:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 12:12:57 +0000   Mon, 09 Dec 2024 11:57:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.25
	  Hostname:    default-k8s-diff-port-482476
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3db52be855aa4f8d8abfa5bc1b27dc59
	  System UUID:                3db52be8-55aa-4f8d-8abf-a5bc1b27dc59
	  Boot ID:                    090d3d7b-d360-4f8d-8f79-abf46cb9ac89
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-7rr27                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-7c65d6cfc9-bb47s                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-default-k8s-diff-port-482476                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kube-apiserver-default-k8s-diff-port-482476             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-482476    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-pgs52                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-default-k8s-diff-port-482476             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-6867b74b74-2lmtn                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node default-k8s-diff-port-482476 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node default-k8s-diff-port-482476 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node default-k8s-diff-port-482476 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m   node-controller  Node default-k8s-diff-port-482476 event: Registered Node default-k8s-diff-port-482476 in Controller
	
	
	==> dmesg <==
	[  +0.051967] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.129819] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.208254] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.409388] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.486673] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.070745] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075723] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.192715] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.117868] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.318838] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[  +4.147301] systemd-fstab-generator[791]: Ignoring "noauto" option for root device
	[  +2.112878] systemd-fstab-generator[913]: Ignoring "noauto" option for root device
	[  +0.073644] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.566190] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.965514] kauditd_printk_skb: 90 callbacks suppressed
	[Dec 9 11:56] kauditd_printk_skb: 4 callbacks suppressed
	[Dec 9 11:57] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.632440] systemd-fstab-generator[2604]: Ignoring "noauto" option for root device
	[  +4.921203] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.639335] systemd-fstab-generator[2927]: Ignoring "noauto" option for root device
	[  +5.440565] systemd-fstab-generator[3059]: Ignoring "noauto" option for root device
	[  +0.090326] kauditd_printk_skb: 14 callbacks suppressed
	[Dec 9 11:58] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [3c9f81dc1c7ecf5412ad6a0ac64634688986ff124ce873c0846bd10cf54ff761] <==
	{"level":"info","ts":"2024-12-09T12:14:05.187472Z","caller":"traceutil/trace.go:171","msg":"trace[184759250] linearizableReadLoop","detail":"{readStateIndex:1468; appliedIndex:1467; }","duration":"220.431411ms","start":"2024-12-09T12:14:04.967024Z","end":"2024-12-09T12:14:05.187455Z","steps":["trace[184759250] 'read index received'  (duration: 220.142084ms)","trace[184759250] 'applied index is now lower than readState.Index'  (duration: 288.774µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T12:14:05.187920Z","caller":"traceutil/trace.go:171","msg":"trace[217180709] transaction","detail":"{read_only:false; response_revision:1255; number_of_response:1; }","duration":"449.084055ms","start":"2024-12-09T12:14:04.738823Z","end":"2024-12-09T12:14:05.187907Z","steps":["trace[217180709] 'process raft request'  (duration: 448.452279ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T12:14:05.189138Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T12:14:04.738807Z","time spent":"450.259653ms","remote":"127.0.0.1:41334","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1253 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-12-09T12:14:05.188245Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.20209ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T12:14:05.189473Z","caller":"traceutil/trace.go:171","msg":"trace[477729274] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1255; }","duration":"222.443692ms","start":"2024-12-09T12:14:04.967018Z","end":"2024-12-09T12:14:05.189462Z","steps":["trace[477729274] 'agreement among raft nodes before linearized reading'  (duration: 221.177532ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T12:14:25.025928Z","caller":"traceutil/trace.go:171","msg":"trace[58632995] transaction","detail":"{read_only:false; response_revision:1271; number_of_response:1; }","duration":"266.009773ms","start":"2024-12-09T12:14:24.759872Z","end":"2024-12-09T12:14:25.025882Z","steps":["trace[58632995] 'process raft request'  (duration: 265.500897ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T12:14:25.285793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.242365ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T12:14:25.285971Z","caller":"traceutil/trace.go:171","msg":"trace[198919471] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1271; }","duration":"140.528584ms","start":"2024-12-09T12:14:25.145426Z","end":"2024-12-09T12:14:25.285955Z","steps":["trace[198919471] 'range keys from in-memory index tree'  (duration: 140.160522ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T12:14:25.425502Z","caller":"traceutil/trace.go:171","msg":"trace[288170216] transaction","detail":"{read_only:false; response_revision:1272; number_of_response:1; }","duration":"112.391996ms","start":"2024-12-09T12:14:25.313085Z","end":"2024-12-09T12:14:25.425477Z","steps":["trace[288170216] 'process raft request'  (duration: 112.179966ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T12:15:12.299803Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.169958ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17662154202496885957 > lease_revoke:<id:751c93ab47ffe465>","response":"size:29"}
	{"level":"info","ts":"2024-12-09T12:15:12.299972Z","caller":"traceutil/trace.go:171","msg":"trace[377577129] linearizableReadLoop","detail":"{readStateIndex:1536; appliedIndex:1535; }","duration":"156.477663ms","start":"2024-12-09T12:15:12.143480Z","end":"2024-12-09T12:15:12.299958Z","steps":["trace[377577129] 'read index received'  (duration: 26.113197ms)","trace[377577129] 'applied index is now lower than readState.Index'  (duration: 130.363527ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T12:15:12.300096Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.657637ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T12:15:12.300139Z","caller":"traceutil/trace.go:171","msg":"trace[255131659] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1309; }","duration":"156.711463ms","start":"2024-12-09T12:15:12.143419Z","end":"2024-12-09T12:15:12.300131Z","steps":["trace[255131659] 'agreement among raft nodes before linearized reading'  (duration: 156.633219ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T12:15:42.293850Z","caller":"traceutil/trace.go:171","msg":"trace[245397686] transaction","detail":"{read_only:false; response_revision:1333; number_of_response:1; }","duration":"288.463813ms","start":"2024-12-09T12:15:42.005363Z","end":"2024-12-09T12:15:42.293827Z","steps":["trace[245397686] 'process raft request'  (duration: 288.07748ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T12:15:42.294393Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.858046ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T12:15:42.294477Z","caller":"traceutil/trace.go:171","msg":"trace[1974958582] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1333; }","duration":"149.95416ms","start":"2024-12-09T12:15:42.144513Z","end":"2024-12-09T12:15:42.294467Z","steps":["trace[1974958582] 'agreement among raft nodes before linearized reading'  (duration: 149.830998ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T12:15:42.294253Z","caller":"traceutil/trace.go:171","msg":"trace[1786020801] linearizableReadLoop","detail":"{readStateIndex:1565; appliedIndex:1564; }","duration":"149.053851ms","start":"2024-12-09T12:15:42.144518Z","end":"2024-12-09T12:15:42.293572Z","steps":["trace[1786020801] 'read index received'  (duration: 148.860994ms)","trace[1786020801] 'applied index is now lower than readState.Index'  (duration: 192.337µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T12:15:42.694200Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.116448ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17662154202496886145 > lease_revoke:<id:751c93ab47ffe520>","response":"size:29"}
	{"level":"info","ts":"2024-12-09T12:15:42.694336Z","caller":"traceutil/trace.go:171","msg":"trace[1600111725] linearizableReadLoop","detail":"{readStateIndex:1566; appliedIndex:1565; }","duration":"397.759237ms","start":"2024-12-09T12:15:42.296526Z","end":"2024-12-09T12:15:42.694285Z","steps":["trace[1600111725] 'read index received'  (duration: 170.504733ms)","trace[1600111725] 'applied index is now lower than readState.Index'  (duration: 227.253744ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T12:15:42.694466Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"397.943481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T12:15:42.694515Z","caller":"traceutil/trace.go:171","msg":"trace[1046127362] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1333; }","duration":"398.002703ms","start":"2024-12-09T12:15:42.296501Z","end":"2024-12-09T12:15:42.694504Z","steps":["trace[1046127362] 'agreement among raft nodes before linearized reading'  (duration: 397.910909ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T12:15:42.694583Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T12:15:42.296468Z","time spent":"398.094709ms","remote":"127.0.0.1:41132","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-12-09T12:15:42.694657Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"379.636637ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csistoragecapacities/\" range_end:\"/registry/csistoragecapacities0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T12:15:42.694728Z","caller":"traceutil/trace.go:171","msg":"trace[267477952] range","detail":"{range_begin:/registry/csistoragecapacities/; range_end:/registry/csistoragecapacities0; response_count:0; response_revision:1333; }","duration":"379.755125ms","start":"2024-12-09T12:15:42.314958Z","end":"2024-12-09T12:15:42.694713Z","steps":["trace[267477952] 'agreement among raft nodes before linearized reading'  (duration: 379.521762ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T12:15:42.695773Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T12:15:42.314917Z","time spent":"380.836265ms","remote":"127.0.0.1:41580","response type":"/etcdserverpb.KV/Range","request count":0,"request size":68,"response count":0,"response size":29,"request content":"key:\"/registry/csistoragecapacities/\" range_end:\"/registry/csistoragecapacities0\" count_only:true "}
	
	
	==> kernel <==
	 12:15:44 up 23 min,  0 users,  load average: 0.09, 0.11, 0.09
	Linux default-k8s-diff-port-482476 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1ba56b590110406b1c4e6646e981b8f00f6fbc55308dacd367bda5339a72a122] <==
	W1209 12:12:25.949672       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:12:25.949773       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1209 12:12:25.950717       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:12:25.950906       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1209 12:13:25.951457       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:13:25.951553       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1209 12:13:25.951613       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:13:25.951630       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1209 12:13:25.952676       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:13:25.952722       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1209 12:15:25.953382       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:15:25.953565       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1209 12:15:25.953381       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:15:25.953671       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1209 12:15:25.954900       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:15:25.955021       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [404b8057940700687ef437a2f86b23b1eb47d811420982efb7d526eb07510390] <==
	W1209 11:57:15.213857       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.281115       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.300961       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.314618       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.412660       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.412661       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.439283       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.446766       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.492979       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.533730       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.568980       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.591592       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.599011       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.711884       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.791048       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.837428       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.839744       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.865629       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.898850       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.922095       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.941619       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:15.958102       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:16.120584       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:16.257291       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:16.259874       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [2fe459e350a0a69c68e97eabcc631964637bad5192e31fc4f9bde455313887ff] <==
	E1209 12:10:31.959764       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:10:32.519409       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:11:01.967344       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:11:02.527383       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:11:31.974018       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:11:32.534793       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:12:01.981367       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:12:02.544543       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:12:31.988922       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:12:32.554942       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1209 12:12:57.149492       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-482476"
	E1209 12:13:01.996207       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:13:02.563197       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:13:32.002691       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:13:32.572918       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1209 12:14:00.790120       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="277.498µs"
	E1209 12:14:02.009490       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:14:02.581529       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1209 12:14:11.864265       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="170.876µs"
	E1209 12:14:32.019673       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:14:32.590123       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:15:02.025924       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:15:02.597877       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:15:32.032057       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:15:32.607202       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a06e322b94f2e8c310d68246143453b976f57c7b04a64527bb6d2624556e53b8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 11:57:33.938738       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 11:57:33.975546       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.25"]
	E1209 11:57:33.975626       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 11:57:34.075671       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 11:57:34.075708       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 11:57:34.075738       1 server_linux.go:169] "Using iptables Proxier"
	I1209 11:57:34.079213       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 11:57:34.079675       1 server.go:483] "Version info" version="v1.31.2"
	I1209 11:57:34.079687       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 11:57:34.081807       1 config.go:199] "Starting service config controller"
	I1209 11:57:34.081825       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 11:57:34.081886       1 config.go:105] "Starting endpoint slice config controller"
	I1209 11:57:34.081892       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 11:57:34.082385       1 config.go:328] "Starting node config controller"
	I1209 11:57:34.082396       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 11:57:34.182478       1 shared_informer.go:320] Caches are synced for node config
	I1209 11:57:34.182519       1 shared_informer.go:320] Caches are synced for service config
	I1209 11:57:34.182549       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1efd65a61828e45b53b8be59a071fc5b43c47d488fe8aba5126af1fe231338fa] <==
	W1209 11:57:24.958517       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 11:57:24.958561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:25.766565       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1209 11:57:25.766675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:25.782455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1209 11:57:25.782499       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:25.959279       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 11:57:25.959373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:25.973473       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 11:57:25.973516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:25.999914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1209 11:57:25.999959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:26.010617       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 11:57:26.010689       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1209 11:57:26.094835       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1209 11:57:26.094918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:26.099191       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 11:57:26.099272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:26.141387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1209 11:57:26.141475       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:26.162913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 11:57:26.162957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:26.165709       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1209 11:57:26.165758       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1209 11:57:29.051506       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 12:14:37 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:14:37.770949    2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lmtn" podUID="60803d31-d0b0-4d51-a9f2-cadafd184a90"
	Dec 09 12:14:38 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:14:38.066775    2934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746478066276574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:14:38 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:14:38.066812    2934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746478066276574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:14:48 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:14:48.069009    2934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746488068451546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:14:48 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:14:48.069358    2934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746488068451546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:14:50 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:14:50.768901    2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lmtn" podUID="60803d31-d0b0-4d51-a9f2-cadafd184a90"
	Dec 09 12:14:58 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:14:58.071811    2934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746498071386978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:14:58 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:14:58.071850    2934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746498071386978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:15:03 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:15:03.768327    2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lmtn" podUID="60803d31-d0b0-4d51-a9f2-cadafd184a90"
	Dec 09 12:15:08 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:15:08.087269    2934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746508074369602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:15:08 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:15:08.087653    2934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746508074369602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:15:14 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:15:14.768221    2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lmtn" podUID="60803d31-d0b0-4d51-a9f2-cadafd184a90"
	Dec 09 12:15:18 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:15:18.089654    2934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746518089112695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:15:18 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:15:18.089688    2934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746518089112695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:15:25 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:15:25.767889    2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lmtn" podUID="60803d31-d0b0-4d51-a9f2-cadafd184a90"
	Dec 09 12:15:27 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:15:27.791153    2934 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 12:15:27 default-k8s-diff-port-482476 kubelet[2934]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 12:15:27 default-k8s-diff-port-482476 kubelet[2934]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 12:15:27 default-k8s-diff-port-482476 kubelet[2934]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 12:15:27 default-k8s-diff-port-482476 kubelet[2934]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 12:15:28 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:15:28.092191    2934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746528091641564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:15:28 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:15:28.092263    2934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746528091641564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:15:38 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:15:38.094512    2934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746538094042217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:15:38 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:15:38.094878    2934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746538094042217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:15:39 default-k8s-diff-port-482476 kubelet[2934]: E1209 12:15:39.768088    2934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2lmtn" podUID="60803d31-d0b0-4d51-a9f2-cadafd184a90"
	
	
	==> storage-provisioner [a6497e24ed8d6cf7d755dd9862baef214e283f280f4ef19432b7b946ffdc04af] <==
	I1209 11:57:35.191270       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 11:57:35.205785       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 11:57:35.206168       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 11:57:35.218112       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 11:57:35.218286       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-482476_2fb80794-68f7-4032-bc04-c068a5d502d0!
	I1209 11:57:35.220806       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f2ba308-83e1-4c51-b2b2-b8ad9215dee4", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-482476_2fb80794-68f7-4032-bc04-c068a5d502d0 became leader
	I1209 11:57:35.320655       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-482476_2fb80794-68f7-4032-bc04-c068a5d502d0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-482476 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-2lmtn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-482476 describe pod metrics-server-6867b74b74-2lmtn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-482476 describe pod metrics-server-6867b74b74-2lmtn: exit status 1 (119.847706ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-2lmtn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-482476 describe pod metrics-server-6867b74b74-2lmtn: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (543.34s)
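A note on manual triage for this profile: the kubelet log above shows metrics-server stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, which matches the addon having been enabled with --registries=MetricsServer=fake.domain (see the Audit table later in this report), so the pull failure itself is expected. A minimal sketch of the same checks run by hand, assuming the cluster is still reachable and that the failed pod belongs to a Deployment named metrics-server (inferred from the pod name, not stated explicitly in the output):

	# Pods that are not Running, roughly as helpers_test.go queries them
	kubectl --context default-k8s-diff-port-482476 get po -A --field-selector=status.phase!=Running

	# Image currently configured on the metrics-server Deployment
	kubectl --context default-k8s-diff-port-482476 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'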

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (358.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-005123 -n embed-certs-005123
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-09 12:13:02.135492391 +0000 UTC m=+5970.596217902
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-005123 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-005123 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.454µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-005123 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
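Before reading the post-mortem logs below, the two conditions this test asserts can be reproduced by hand. This is a minimal sketch using only the context, namespace, label, and deployment name taken from the output above:

	# Dashboard pods the test waited 9m0s for
	kubectl --context embed-certs-005123 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

	# Images on the scraper deployment; the test expects registry.k8s.io/echoserver:1.4 to appear
	kubectl --context embed-certs-005123 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'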
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-005123 -n embed-certs-005123
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-005123 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-005123 logs -n 25: (1.306924074s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-005123            | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC | 09 Dec 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-005123                                  | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-820741             | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC | 09 Dec 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:46 UTC |
	| start   | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:47 UTC |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-005123                 | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-005123                                  | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-014592        | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-820741                  | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC | 09 Dec 24 11:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-482476  | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC | 09 Dec 24 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC |                     |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC | 09 Dec 24 11:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-014592             | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC | 09 Dec 24 11:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-482476       | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:49 UTC | 09 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 12:12 UTC | 09 Dec 24 12:12 UTC |
	| start   | -p newest-cni-932878 --memory=2200 --alsologtostderr   | newest-cni-932878            | jenkins | v1.34.0 | 09 Dec 24 12:12 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 12:12:19
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 12:12:19.117795  668975 out.go:345] Setting OutFile to fd 1 ...
	I1209 12:12:19.117916  668975 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 12:12:19.117926  668975 out.go:358] Setting ErrFile to fd 2...
	I1209 12:12:19.117931  668975 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 12:12:19.118095  668975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 12:12:19.118715  668975 out.go:352] Setting JSON to false
	I1209 12:12:19.119858  668975 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":17683,"bootTime":1733728656,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 12:12:19.119971  668975 start.go:139] virtualization: kvm guest
	I1209 12:12:19.122361  668975 out.go:177] * [newest-cni-932878] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 12:12:19.123958  668975 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 12:12:19.123990  668975 notify.go:220] Checking for updates...
	I1209 12:12:19.125979  668975 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 12:12:19.127091  668975 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 12:12:19.128132  668975 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 12:12:19.129266  668975 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 12:12:19.130332  668975 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 12:12:19.131945  668975 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 12:12:19.132081  668975 config.go:182] Loaded profile config "embed-certs-005123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 12:12:19.132243  668975 config.go:182] Loaded profile config "no-preload-820741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 12:12:19.132397  668975 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 12:12:19.169995  668975 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 12:12:19.171223  668975 start.go:297] selected driver: kvm2
	I1209 12:12:19.171273  668975 start.go:901] validating driver "kvm2" against <nil>
	I1209 12:12:19.171310  668975 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 12:12:19.172088  668975 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 12:12:19.172204  668975 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 12:12:19.189230  668975 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 12:12:19.189318  668975 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1209 12:12:19.189427  668975 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1209 12:12:19.189760  668975 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 12:12:19.189802  668975 cni.go:84] Creating CNI manager for ""
	I1209 12:12:19.189867  668975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 12:12:19.189877  668975 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 12:12:19.189959  668975 start.go:340] cluster config:
	{Name:newest-cni-932878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-932878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 12:12:19.190121  668975 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 12:12:19.192525  668975 out.go:177] * Starting "newest-cni-932878" primary control-plane node in "newest-cni-932878" cluster
	I1209 12:12:19.193915  668975 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 12:12:19.193984  668975 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 12:12:19.194003  668975 cache.go:56] Caching tarball of preloaded images
	I1209 12:12:19.194116  668975 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 12:12:19.194134  668975 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 12:12:19.194285  668975 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/config.json ...
	I1209 12:12:19.194311  668975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/config.json: {Name:mke45a5094f19c89e680c251980c8332344d602d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:12:19.194517  668975 start.go:360] acquireMachinesLock for newest-cni-932878: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 12:12:19.194558  668975 start.go:364] duration metric: took 23.332µs to acquireMachinesLock for "newest-cni-932878"
	I1209 12:12:19.194583  668975 start.go:93] Provisioning new machine with config: &{Name:newest-cni-932878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-932878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 12:12:19.194722  668975 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 12:12:19.197836  668975 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 12:12:19.198339  668975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 12:12:19.198380  668975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 12:12:19.215517  668975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I1209 12:12:19.215983  668975 main.go:141] libmachine: () Calling .GetVersion
	I1209 12:12:19.216672  668975 main.go:141] libmachine: Using API Version  1
	I1209 12:12:19.216697  668975 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 12:12:19.217030  668975 main.go:141] libmachine: () Calling .GetMachineName
	I1209 12:12:19.217226  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetMachineName
	I1209 12:12:19.217372  668975 main.go:141] libmachine: (newest-cni-932878) Calling .DriverName
	I1209 12:12:19.217543  668975 start.go:159] libmachine.API.Create for "newest-cni-932878" (driver="kvm2")
	I1209 12:12:19.217564  668975 client.go:168] LocalClient.Create starting
	I1209 12:12:19.217614  668975 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem
	I1209 12:12:19.217658  668975 main.go:141] libmachine: Decoding PEM data...
	I1209 12:12:19.217678  668975 main.go:141] libmachine: Parsing certificate...
	I1209 12:12:19.217752  668975 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem
	I1209 12:12:19.217782  668975 main.go:141] libmachine: Decoding PEM data...
	I1209 12:12:19.217799  668975 main.go:141] libmachine: Parsing certificate...
	I1209 12:12:19.217825  668975 main.go:141] libmachine: Running pre-create checks...
	I1209 12:12:19.217838  668975 main.go:141] libmachine: (newest-cni-932878) Calling .PreCreateCheck
	I1209 12:12:19.218218  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetConfigRaw
	I1209 12:12:19.218603  668975 main.go:141] libmachine: Creating machine...
	I1209 12:12:19.218616  668975 main.go:141] libmachine: (newest-cni-932878) Calling .Create
	I1209 12:12:19.218738  668975 main.go:141] libmachine: (newest-cni-932878) Creating KVM machine...
	I1209 12:12:19.220063  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found existing default KVM network
	I1209 12:12:19.221361  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:19.221190  669015 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:80:72:1a} reservation:<nil>}
	I1209 12:12:19.222288  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:19.222214  669015 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:73:15:18} reservation:<nil>}
	I1209 12:12:19.223607  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:19.223515  669015 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028d590}
	I1209 12:12:19.223629  668975 main.go:141] libmachine: (newest-cni-932878) DBG | created network xml: 
	I1209 12:12:19.223639  668975 main.go:141] libmachine: (newest-cni-932878) DBG | <network>
	I1209 12:12:19.223648  668975 main.go:141] libmachine: (newest-cni-932878) DBG |   <name>mk-newest-cni-932878</name>
	I1209 12:12:19.223657  668975 main.go:141] libmachine: (newest-cni-932878) DBG |   <dns enable='no'/>
	I1209 12:12:19.223670  668975 main.go:141] libmachine: (newest-cni-932878) DBG |   
	I1209 12:12:19.223680  668975 main.go:141] libmachine: (newest-cni-932878) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1209 12:12:19.223687  668975 main.go:141] libmachine: (newest-cni-932878) DBG |     <dhcp>
	I1209 12:12:19.223700  668975 main.go:141] libmachine: (newest-cni-932878) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1209 12:12:19.223717  668975 main.go:141] libmachine: (newest-cni-932878) DBG |     </dhcp>
	I1209 12:12:19.223728  668975 main.go:141] libmachine: (newest-cni-932878) DBG |   </ip>
	I1209 12:12:19.223737  668975 main.go:141] libmachine: (newest-cni-932878) DBG |   
	I1209 12:12:19.223746  668975 main.go:141] libmachine: (newest-cni-932878) DBG | </network>
	I1209 12:12:19.223757  668975 main.go:141] libmachine: (newest-cni-932878) DBG | 
	I1209 12:12:19.228903  668975 main.go:141] libmachine: (newest-cni-932878) DBG | trying to create private KVM network mk-newest-cni-932878 192.168.61.0/24...
	I1209 12:12:19.309562  668975 main.go:141] libmachine: (newest-cni-932878) Setting up store path in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/newest-cni-932878 ...
	I1209 12:12:19.309609  668975 main.go:141] libmachine: (newest-cni-932878) DBG | private KVM network mk-newest-cni-932878 192.168.61.0/24 created
	I1209 12:12:19.309625  668975 main.go:141] libmachine: (newest-cni-932878) Building disk image from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 12:12:19.309641  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:19.306865  669015 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 12:12:19.309684  668975 main.go:141] libmachine: (newest-cni-932878) Downloading /home/jenkins/minikube-integration/20068-609844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 12:12:19.640432  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:19.640277  669015 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/newest-cni-932878/id_rsa...
	I1209 12:12:19.797524  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:19.797399  669015 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/newest-cni-932878/newest-cni-932878.rawdisk...
	I1209 12:12:19.797558  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Writing magic tar header
	I1209 12:12:19.797574  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Writing SSH key tar header
	I1209 12:12:19.797583  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:19.797560  669015 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/newest-cni-932878 ...
	I1209 12:12:19.797781  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/newest-cni-932878
	I1209 12:12:19.797814  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube/machines
	I1209 12:12:19.797825  668975 main.go:141] libmachine: (newest-cni-932878) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines/newest-cni-932878 (perms=drwx------)
	I1209 12:12:19.797840  668975 main.go:141] libmachine: (newest-cni-932878) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube/machines (perms=drwxr-xr-x)
	I1209 12:12:19.797853  668975 main.go:141] libmachine: (newest-cni-932878) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844/.minikube (perms=drwxr-xr-x)
	I1209 12:12:19.797872  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 12:12:19.797883  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20068-609844
	I1209 12:12:19.797890  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 12:12:19.797897  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Checking permissions on dir: /home/jenkins
	I1209 12:12:19.797904  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Checking permissions on dir: /home
	I1209 12:12:19.797914  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Skipping /home - not owner
	I1209 12:12:19.797925  668975 main.go:141] libmachine: (newest-cni-932878) Setting executable bit set on /home/jenkins/minikube-integration/20068-609844 (perms=drwxrwxr-x)
	I1209 12:12:19.797936  668975 main.go:141] libmachine: (newest-cni-932878) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 12:12:19.797982  668975 main.go:141] libmachine: (newest-cni-932878) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 12:12:19.798017  668975 main.go:141] libmachine: (newest-cni-932878) Creating domain...
	I1209 12:12:19.799237  668975 main.go:141] libmachine: (newest-cni-932878) define libvirt domain using xml: 
	I1209 12:12:19.799264  668975 main.go:141] libmachine: (newest-cni-932878) <domain type='kvm'>
	I1209 12:12:19.799275  668975 main.go:141] libmachine: (newest-cni-932878)   <name>newest-cni-932878</name>
	I1209 12:12:19.799288  668975 main.go:141] libmachine: (newest-cni-932878)   <memory unit='MiB'>2200</memory>
	I1209 12:12:19.799302  668975 main.go:141] libmachine: (newest-cni-932878)   <vcpu>2</vcpu>
	I1209 12:12:19.799309  668975 main.go:141] libmachine: (newest-cni-932878)   <features>
	I1209 12:12:19.799322  668975 main.go:141] libmachine: (newest-cni-932878)     <acpi/>
	I1209 12:12:19.799333  668975 main.go:141] libmachine: (newest-cni-932878)     <apic/>
	I1209 12:12:19.799338  668975 main.go:141] libmachine: (newest-cni-932878)     <pae/>
	I1209 12:12:19.799345  668975 main.go:141] libmachine: (newest-cni-932878)     
	I1209 12:12:19.799350  668975 main.go:141] libmachine: (newest-cni-932878)   </features>
	I1209 12:12:19.799360  668975 main.go:141] libmachine: (newest-cni-932878)   <cpu mode='host-passthrough'>
	I1209 12:12:19.799392  668975 main.go:141] libmachine: (newest-cni-932878)   
	I1209 12:12:19.799418  668975 main.go:141] libmachine: (newest-cni-932878)   </cpu>
	I1209 12:12:19.799427  668975 main.go:141] libmachine: (newest-cni-932878)   <os>
	I1209 12:12:19.799438  668975 main.go:141] libmachine: (newest-cni-932878)     <type>hvm</type>
	I1209 12:12:19.799448  668975 main.go:141] libmachine: (newest-cni-932878)     <boot dev='cdrom'/>
	I1209 12:12:19.799457  668975 main.go:141] libmachine: (newest-cni-932878)     <boot dev='hd'/>
	I1209 12:12:19.799467  668975 main.go:141] libmachine: (newest-cni-932878)     <bootmenu enable='no'/>
	I1209 12:12:19.799518  668975 main.go:141] libmachine: (newest-cni-932878)   </os>
	I1209 12:12:19.799539  668975 main.go:141] libmachine: (newest-cni-932878)   <devices>
	I1209 12:12:19.799555  668975 main.go:141] libmachine: (newest-cni-932878)     <disk type='file' device='cdrom'>
	I1209 12:12:19.799569  668975 main.go:141] libmachine: (newest-cni-932878)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/newest-cni-932878/boot2docker.iso'/>
	I1209 12:12:19.799585  668975 main.go:141] libmachine: (newest-cni-932878)       <target dev='hdc' bus='scsi'/>
	I1209 12:12:19.799595  668975 main.go:141] libmachine: (newest-cni-932878)       <readonly/>
	I1209 12:12:19.799603  668975 main.go:141] libmachine: (newest-cni-932878)     </disk>
	I1209 12:12:19.799614  668975 main.go:141] libmachine: (newest-cni-932878)     <disk type='file' device='disk'>
	I1209 12:12:19.799636  668975 main.go:141] libmachine: (newest-cni-932878)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 12:12:19.799651  668975 main.go:141] libmachine: (newest-cni-932878)       <source file='/home/jenkins/minikube-integration/20068-609844/.minikube/machines/newest-cni-932878/newest-cni-932878.rawdisk'/>
	I1209 12:12:19.799662  668975 main.go:141] libmachine: (newest-cni-932878)       <target dev='hda' bus='virtio'/>
	I1209 12:12:19.799672  668975 main.go:141] libmachine: (newest-cni-932878)     </disk>
	I1209 12:12:19.799681  668975 main.go:141] libmachine: (newest-cni-932878)     <interface type='network'>
	I1209 12:12:19.799697  668975 main.go:141] libmachine: (newest-cni-932878)       <source network='mk-newest-cni-932878'/>
	I1209 12:12:19.799709  668975 main.go:141] libmachine: (newest-cni-932878)       <model type='virtio'/>
	I1209 12:12:19.799719  668975 main.go:141] libmachine: (newest-cni-932878)     </interface>
	I1209 12:12:19.799730  668975 main.go:141] libmachine: (newest-cni-932878)     <interface type='network'>
	I1209 12:12:19.799741  668975 main.go:141] libmachine: (newest-cni-932878)       <source network='default'/>
	I1209 12:12:19.799749  668975 main.go:141] libmachine: (newest-cni-932878)       <model type='virtio'/>
	I1209 12:12:19.799759  668975 main.go:141] libmachine: (newest-cni-932878)     </interface>
	I1209 12:12:19.799768  668975 main.go:141] libmachine: (newest-cni-932878)     <serial type='pty'>
	I1209 12:12:19.799784  668975 main.go:141] libmachine: (newest-cni-932878)       <target port='0'/>
	I1209 12:12:19.799792  668975 main.go:141] libmachine: (newest-cni-932878)     </serial>
	I1209 12:12:19.799799  668975 main.go:141] libmachine: (newest-cni-932878)     <console type='pty'>
	I1209 12:12:19.799817  668975 main.go:141] libmachine: (newest-cni-932878)       <target type='serial' port='0'/>
	I1209 12:12:19.799828  668975 main.go:141] libmachine: (newest-cni-932878)     </console>
	I1209 12:12:19.799837  668975 main.go:141] libmachine: (newest-cni-932878)     <rng model='virtio'>
	I1209 12:12:19.799849  668975 main.go:141] libmachine: (newest-cni-932878)       <backend model='random'>/dev/random</backend>
	I1209 12:12:19.799857  668975 main.go:141] libmachine: (newest-cni-932878)     </rng>
	I1209 12:12:19.799867  668975 main.go:141] libmachine: (newest-cni-932878)     
	I1209 12:12:19.799893  668975 main.go:141] libmachine: (newest-cni-932878)     
	I1209 12:12:19.799919  668975 main.go:141] libmachine: (newest-cni-932878)   </devices>
	I1209 12:12:19.799927  668975 main.go:141] libmachine: (newest-cni-932878) </domain>
	I1209 12:12:19.799936  668975 main.go:141] libmachine: (newest-cni-932878) 
	I1209 12:12:19.805681  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:8d:6c:d3 in network default
	I1209 12:12:19.806311  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:19.806342  668975 main.go:141] libmachine: (newest-cni-932878) Ensuring networks are active...
	I1209 12:12:19.807097  668975 main.go:141] libmachine: (newest-cni-932878) Ensuring network default is active
	I1209 12:12:19.807446  668975 main.go:141] libmachine: (newest-cni-932878) Ensuring network mk-newest-cni-932878 is active
	I1209 12:12:19.808039  668975 main.go:141] libmachine: (newest-cni-932878) Getting domain xml...
	I1209 12:12:19.808857  668975 main.go:141] libmachine: (newest-cni-932878) Creating domain...
	I1209 12:12:21.072327  668975 main.go:141] libmachine: (newest-cni-932878) Waiting to get IP...
	I1209 12:12:21.073030  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:21.073568  668975 main.go:141] libmachine: (newest-cni-932878) DBG | unable to find current IP address of domain newest-cni-932878 in network mk-newest-cni-932878
	I1209 12:12:21.073679  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:21.073554  669015 retry.go:31] will retry after 234.611739ms: waiting for machine to come up
	I1209 12:12:21.310253  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:21.310820  668975 main.go:141] libmachine: (newest-cni-932878) DBG | unable to find current IP address of domain newest-cni-932878 in network mk-newest-cni-932878
	I1209 12:12:21.310843  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:21.310786  669015 retry.go:31] will retry after 271.695816ms: waiting for machine to come up
	I1209 12:12:21.584338  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:21.584744  668975 main.go:141] libmachine: (newest-cni-932878) DBG | unable to find current IP address of domain newest-cni-932878 in network mk-newest-cni-932878
	I1209 12:12:21.584770  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:21.584704  669015 retry.go:31] will retry after 331.006375ms: waiting for machine to come up
	I1209 12:12:21.917273  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:21.917827  668975 main.go:141] libmachine: (newest-cni-932878) DBG | unable to find current IP address of domain newest-cni-932878 in network mk-newest-cni-932878
	I1209 12:12:21.917861  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:21.917763  669015 retry.go:31] will retry after 465.452749ms: waiting for machine to come up
	I1209 12:12:22.384637  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:22.385221  668975 main.go:141] libmachine: (newest-cni-932878) DBG | unable to find current IP address of domain newest-cni-932878 in network mk-newest-cni-932878
	I1209 12:12:22.385255  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:22.385184  669015 retry.go:31] will retry after 670.989786ms: waiting for machine to come up
	I1209 12:12:23.058062  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:23.058514  668975 main.go:141] libmachine: (newest-cni-932878) DBG | unable to find current IP address of domain newest-cni-932878 in network mk-newest-cni-932878
	I1209 12:12:23.058542  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:23.058444  669015 retry.go:31] will retry after 844.218177ms: waiting for machine to come up
	I1209 12:12:23.904485  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:23.905024  668975 main.go:141] libmachine: (newest-cni-932878) DBG | unable to find current IP address of domain newest-cni-932878 in network mk-newest-cni-932878
	I1209 12:12:23.905054  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:23.904972  669015 retry.go:31] will retry after 1.160513038s: waiting for machine to come up
	I1209 12:12:25.067848  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:25.068379  668975 main.go:141] libmachine: (newest-cni-932878) DBG | unable to find current IP address of domain newest-cni-932878 in network mk-newest-cni-932878
	I1209 12:12:25.068414  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:25.068325  669015 retry.go:31] will retry after 1.327711606s: waiting for machine to come up
	I1209 12:12:26.397300  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:26.397730  668975 main.go:141] libmachine: (newest-cni-932878) DBG | unable to find current IP address of domain newest-cni-932878 in network mk-newest-cni-932878
	I1209 12:12:26.397759  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:26.397674  669015 retry.go:31] will retry after 1.356204368s: waiting for machine to come up
	I1209 12:12:27.755489  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:27.755946  668975 main.go:141] libmachine: (newest-cni-932878) DBG | unable to find current IP address of domain newest-cni-932878 in network mk-newest-cni-932878
	I1209 12:12:27.755973  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:27.755904  669015 retry.go:31] will retry after 1.943107859s: waiting for machine to come up
	I1209 12:12:29.701001  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:29.701458  668975 main.go:141] libmachine: (newest-cni-932878) DBG | unable to find current IP address of domain newest-cni-932878 in network mk-newest-cni-932878
	I1209 12:12:29.701496  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:29.701407  669015 retry.go:31] will retry after 1.89603483s: waiting for machine to come up
	I1209 12:12:31.599572  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:31.600118  668975 main.go:141] libmachine: (newest-cni-932878) DBG | unable to find current IP address of domain newest-cni-932878 in network mk-newest-cni-932878
	I1209 12:12:31.600152  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:31.600050  669015 retry.go:31] will retry after 2.337898716s: waiting for machine to come up
	I1209 12:12:33.939712  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:33.940271  668975 main.go:141] libmachine: (newest-cni-932878) DBG | unable to find current IP address of domain newest-cni-932878 in network mk-newest-cni-932878
	I1209 12:12:33.940308  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:33.940202  669015 retry.go:31] will retry after 3.77377801s: waiting for machine to come up
	I1209 12:12:37.717577  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:37.717965  668975 main.go:141] libmachine: (newest-cni-932878) DBG | unable to find current IP address of domain newest-cni-932878 in network mk-newest-cni-932878
	I1209 12:12:37.717995  668975 main.go:141] libmachine: (newest-cni-932878) DBG | I1209 12:12:37.717913  669015 retry.go:31] will retry after 3.641530308s: waiting for machine to come up
	I1209 12:12:41.361336  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:41.361856  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has current primary IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:41.361874  668975 main.go:141] libmachine: (newest-cni-932878) Found IP for machine: 192.168.61.104
	I1209 12:12:41.361883  668975 main.go:141] libmachine: (newest-cni-932878) Reserving static IP address...
	I1209 12:12:41.362367  668975 main.go:141] libmachine: (newest-cni-932878) DBG | unable to find host DHCP lease matching {name: "newest-cni-932878", mac: "52:54:00:b1:cb:7a", ip: "192.168.61.104"} in network mk-newest-cni-932878
	I1209 12:12:41.445582  668975 main.go:141] libmachine: (newest-cni-932878) Reserved static IP address: 192.168.61.104
	I1209 12:12:41.445612  668975 main.go:141] libmachine: (newest-cni-932878) Waiting for SSH to be available...
	I1209 12:12:41.445622  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Getting to WaitForSSH function...
	I1209 12:12:41.448112  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:41.448760  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:cb:7a", ip: ""} in network mk-newest-cni-932878: {Iface:virbr3 ExpiryTime:2024-12-09 13:12:33 +0000 UTC Type:0 Mac:52:54:00:b1:cb:7a Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b1:cb:7a}
	I1209 12:12:41.448806  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:41.448912  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Using SSH client type: external
	I1209 12:12:41.448941  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/newest-cni-932878/id_rsa (-rw-------)
	I1209 12:12:41.448973  668975 main.go:141] libmachine: (newest-cni-932878) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/newest-cni-932878/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 12:12:41.448987  668975 main.go:141] libmachine: (newest-cni-932878) DBG | About to run SSH command:
	I1209 12:12:41.449000  668975 main.go:141] libmachine: (newest-cni-932878) DBG | exit 0
	I1209 12:12:41.574539  668975 main.go:141] libmachine: (newest-cni-932878) DBG | SSH cmd err, output: <nil>: 
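
The external SSH invocation above probes the new VM by running `exit 0` with a fixed set of hardening options. A self-contained sketch of assembling and running that argument list follows; sshProbe is an illustrative helper, and the key path and address are taken from the log.

package main

import (
	"fmt"
	"os/exec"
)

// sshProbe runs `exit 0` on the target over ssh(1) with the same options the
// log shows, returning nil once the VM accepts the connection.
func sshProbe(user, addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no", "-o", "ControlPath=none",
		"-o", "LogLevel=quiet", "-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60", "-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		fmt.Sprintf("%s@%s", user, addr),
		"-o", "IdentitiesOnly=yes", "-i", keyPath, "-p", "22",
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	err := sshProbe("docker", "192.168.61.104",
		"/home/jenkins/minikube-integration/20068-609844/.minikube/machines/newest-cni-932878/id_rsa")
	fmt.Println("SSH cmd err:", err)
}
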
	I1209 12:12:41.574898  668975 main.go:141] libmachine: (newest-cni-932878) KVM machine creation complete!
	I1209 12:12:41.575232  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetConfigRaw
	I1209 12:12:41.575880  668975 main.go:141] libmachine: (newest-cni-932878) Calling .DriverName
	I1209 12:12:41.576099  668975 main.go:141] libmachine: (newest-cni-932878) Calling .DriverName
	I1209 12:12:41.576341  668975 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 12:12:41.576363  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetState
	I1209 12:12:41.577936  668975 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 12:12:41.577963  668975 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 12:12:41.577968  668975 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 12:12:41.577973  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHHostname
	I1209 12:12:41.580462  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:41.580889  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:cb:7a", ip: ""} in network mk-newest-cni-932878: {Iface:virbr3 ExpiryTime:2024-12-09 13:12:33 +0000 UTC Type:0 Mac:52:54:00:b1:cb:7a Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:newest-cni-932878 Clientid:01:52:54:00:b1:cb:7a}
	I1209 12:12:41.580919  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:41.581024  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHPort
	I1209 12:12:41.581227  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHKeyPath
	I1209 12:12:41.581396  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHKeyPath
	I1209 12:12:41.581549  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHUsername
	I1209 12:12:41.581705  668975 main.go:141] libmachine: Using SSH client type: native
	I1209 12:12:41.581987  668975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I1209 12:12:41.582000  668975 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 12:12:41.681644  668975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 12:12:41.681681  668975 main.go:141] libmachine: Detecting the provisioner...
	I1209 12:12:41.681692  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHHostname
	I1209 12:12:41.684756  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:41.685211  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:cb:7a", ip: ""} in network mk-newest-cni-932878: {Iface:virbr3 ExpiryTime:2024-12-09 13:12:33 +0000 UTC Type:0 Mac:52:54:00:b1:cb:7a Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:newest-cni-932878 Clientid:01:52:54:00:b1:cb:7a}
	I1209 12:12:41.685248  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:41.685398  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHPort
	I1209 12:12:41.685639  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHKeyPath
	I1209 12:12:41.685835  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHKeyPath
	I1209 12:12:41.685970  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHUsername
	I1209 12:12:41.686139  668975 main.go:141] libmachine: Using SSH client type: native
	I1209 12:12:41.686349  668975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I1209 12:12:41.686362  668975 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 12:12:41.787584  668975 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 12:12:41.787737  668975 main.go:141] libmachine: found compatible host: buildroot
	I1209 12:12:41.787756  668975 main.go:141] libmachine: Provisioning with buildroot...
	I1209 12:12:41.787771  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetMachineName
	I1209 12:12:41.788122  668975 buildroot.go:166] provisioning hostname "newest-cni-932878"
	I1209 12:12:41.788161  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetMachineName
	I1209 12:12:41.788395  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHHostname
	I1209 12:12:41.791023  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:41.791383  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:cb:7a", ip: ""} in network mk-newest-cni-932878: {Iface:virbr3 ExpiryTime:2024-12-09 13:12:33 +0000 UTC Type:0 Mac:52:54:00:b1:cb:7a Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:newest-cni-932878 Clientid:01:52:54:00:b1:cb:7a}
	I1209 12:12:41.791422  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:41.791534  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHPort
	I1209 12:12:41.791713  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHKeyPath
	I1209 12:12:41.791835  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHKeyPath
	I1209 12:12:41.791937  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHUsername
	I1209 12:12:41.792114  668975 main.go:141] libmachine: Using SSH client type: native
	I1209 12:12:41.792320  668975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I1209 12:12:41.792333  668975 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-932878 && echo "newest-cni-932878" | sudo tee /etc/hostname
	I1209 12:12:41.907830  668975 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-932878
	
	I1209 12:12:41.907862  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHHostname
	I1209 12:12:41.910529  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:41.910887  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:cb:7a", ip: ""} in network mk-newest-cni-932878: {Iface:virbr3 ExpiryTime:2024-12-09 13:12:33 +0000 UTC Type:0 Mac:52:54:00:b1:cb:7a Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:newest-cni-932878 Clientid:01:52:54:00:b1:cb:7a}
	I1209 12:12:41.910922  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:41.911096  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHPort
	I1209 12:12:41.911307  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHKeyPath
	I1209 12:12:41.911469  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHKeyPath
	I1209 12:12:41.911606  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHUsername
	I1209 12:12:41.911764  668975 main.go:141] libmachine: Using SSH client type: native
	I1209 12:12:41.911929  668975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I1209 12:12:41.911944  668975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-932878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-932878/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-932878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 12:12:42.022773  668975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
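
The two SSH commands above set the hostname and make sure /etc/hosts has a matching 127.0.1.1 entry. A hedged sketch of how those shell one-liners can be built for an arbitrary hostname follows; buildHostnameCmd and buildHostsFixup are illustrative helpers, not minikube's provisioner.

package main

import "fmt"

// buildHostnameCmd reproduces the `sudo hostname ... | sudo tee /etc/hostname` step.
func buildHostnameCmd(hostname string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, hostname)
}

// buildHostsFixup reproduces the /etc/hosts patch: rewrite an existing
// 127.0.1.1 line if present, otherwise append one.
func buildHostsFixup(hostname string) string {
	return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`, hostname)
}

func main() {
	fmt.Println(buildHostnameCmd("newest-cni-932878"))
	fmt.Println(buildHostsFixup("newest-cni-932878"))
}
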
	I1209 12:12:42.022811  668975 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 12:12:42.022835  668975 buildroot.go:174] setting up certificates
	I1209 12:12:42.022848  668975 provision.go:84] configureAuth start
	I1209 12:12:42.022859  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetMachineName
	I1209 12:12:42.023155  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetIP
	I1209 12:12:42.026455  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.026864  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:cb:7a", ip: ""} in network mk-newest-cni-932878: {Iface:virbr3 ExpiryTime:2024-12-09 13:12:33 +0000 UTC Type:0 Mac:52:54:00:b1:cb:7a Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:newest-cni-932878 Clientid:01:52:54:00:b1:cb:7a}
	I1209 12:12:42.026894  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.027063  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHHostname
	I1209 12:12:42.029419  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.029728  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:cb:7a", ip: ""} in network mk-newest-cni-932878: {Iface:virbr3 ExpiryTime:2024-12-09 13:12:33 +0000 UTC Type:0 Mac:52:54:00:b1:cb:7a Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:newest-cni-932878 Clientid:01:52:54:00:b1:cb:7a}
	I1209 12:12:42.029770  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.029945  668975 provision.go:143] copyHostCerts
	I1209 12:12:42.030016  668975 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 12:12:42.030038  668975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 12:12:42.030122  668975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 12:12:42.030288  668975 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 12:12:42.030301  668975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 12:12:42.030335  668975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 12:12:42.030393  668975 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 12:12:42.030401  668975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 12:12:42.030424  668975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 12:12:42.030475  668975 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.newest-cni-932878 san=[127.0.0.1 192.168.61.104 localhost minikube newest-cni-932878]
	I1209 12:12:42.078822  668975 provision.go:177] copyRemoteCerts
	I1209 12:12:42.078888  668975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 12:12:42.078915  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHHostname
	I1209 12:12:42.081846  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.082137  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:cb:7a", ip: ""} in network mk-newest-cni-932878: {Iface:virbr3 ExpiryTime:2024-12-09 13:12:33 +0000 UTC Type:0 Mac:52:54:00:b1:cb:7a Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:newest-cni-932878 Clientid:01:52:54:00:b1:cb:7a}
	I1209 12:12:42.082159  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.082434  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHPort
	I1209 12:12:42.082621  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHKeyPath
	I1209 12:12:42.082798  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHUsername
	I1209 12:12:42.082937  668975 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/newest-cni-932878/id_rsa Username:docker}
	I1209 12:12:42.164930  668975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 12:12:42.187542  668975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 12:12:42.211521  668975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 12:12:42.236962  668975 provision.go:87] duration metric: took 214.096618ms to configureAuth
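
configureAuth above copies the host CA material and generates a server certificate whose SANs match the logged list (127.0.0.1, 192.168.61.104, localhost, minikube, newest-cni-932878). The sketch below shows one way to issue such a cert from an existing CA with Go's crypto/x509; the file names, the PKCS#1 key format, and reusing the 26280h CertExpiration from the config are assumptions, and this is not minikube's provision code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the CA cert and key (assumed RSA/PKCS#1 PEM, as under .minikube/certs).
	caCertPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	check(err)

	// New key pair for the server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-932878"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // assumption: matches CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-932878"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.104")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)

	check(os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	check(os.WriteFile("server-key.pem",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
}
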
	I1209 12:12:42.237002  668975 buildroot.go:189] setting minikube options for container-runtime
	I1209 12:12:42.237213  668975 config.go:182] Loaded profile config "newest-cni-932878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 12:12:42.237307  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHHostname
	I1209 12:12:42.240192  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.240590  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:cb:7a", ip: ""} in network mk-newest-cni-932878: {Iface:virbr3 ExpiryTime:2024-12-09 13:12:33 +0000 UTC Type:0 Mac:52:54:00:b1:cb:7a Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:newest-cni-932878 Clientid:01:52:54:00:b1:cb:7a}
	I1209 12:12:42.240617  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.240868  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHPort
	I1209 12:12:42.241070  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHKeyPath
	I1209 12:12:42.241230  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHKeyPath
	I1209 12:12:42.241355  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHUsername
	I1209 12:12:42.241507  668975 main.go:141] libmachine: Using SSH client type: native
	I1209 12:12:42.241679  668975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I1209 12:12:42.241694  668975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 12:12:42.468212  668975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 12:12:42.468270  668975 main.go:141] libmachine: Checking connection to Docker...
	I1209 12:12:42.468284  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetURL
	I1209 12:12:42.469782  668975 main.go:141] libmachine: (newest-cni-932878) DBG | Using libvirt version 6000000
	I1209 12:12:42.472140  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.472509  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:cb:7a", ip: ""} in network mk-newest-cni-932878: {Iface:virbr3 ExpiryTime:2024-12-09 13:12:33 +0000 UTC Type:0 Mac:52:54:00:b1:cb:7a Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:newest-cni-932878 Clientid:01:52:54:00:b1:cb:7a}
	I1209 12:12:42.472539  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.472680  668975 main.go:141] libmachine: Docker is up and running!
	I1209 12:12:42.472701  668975 main.go:141] libmachine: Reticulating splines...
	I1209 12:12:42.472710  668975 client.go:171] duration metric: took 23.255137337s to LocalClient.Create
	I1209 12:12:42.472748  668975 start.go:167] duration metric: took 23.255205629s to libmachine.API.Create "newest-cni-932878"
	I1209 12:12:42.472759  668975 start.go:293] postStartSetup for "newest-cni-932878" (driver="kvm2")
	I1209 12:12:42.472769  668975 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 12:12:42.472788  668975 main.go:141] libmachine: (newest-cni-932878) Calling .DriverName
	I1209 12:12:42.473068  668975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 12:12:42.473113  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHHostname
	I1209 12:12:42.475300  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.475599  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:cb:7a", ip: ""} in network mk-newest-cni-932878: {Iface:virbr3 ExpiryTime:2024-12-09 13:12:33 +0000 UTC Type:0 Mac:52:54:00:b1:cb:7a Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:newest-cni-932878 Clientid:01:52:54:00:b1:cb:7a}
	I1209 12:12:42.475626  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.475770  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHPort
	I1209 12:12:42.475950  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHKeyPath
	I1209 12:12:42.476109  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHUsername
	I1209 12:12:42.476247  668975 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/newest-cni-932878/id_rsa Username:docker}
	I1209 12:12:42.557235  668975 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 12:12:42.561247  668975 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 12:12:42.561272  668975 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 12:12:42.561356  668975 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 12:12:42.561431  668975 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 12:12:42.561529  668975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 12:12:42.570736  668975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 12:12:42.592988  668975 start.go:296] duration metric: took 120.212321ms for postStartSetup
	I1209 12:12:42.593055  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetConfigRaw
	I1209 12:12:42.593649  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetIP
	I1209 12:12:42.596541  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.596867  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:cb:7a", ip: ""} in network mk-newest-cni-932878: {Iface:virbr3 ExpiryTime:2024-12-09 13:12:33 +0000 UTC Type:0 Mac:52:54:00:b1:cb:7a Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:newest-cni-932878 Clientid:01:52:54:00:b1:cb:7a}
	I1209 12:12:42.596896  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.597130  668975 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/config.json ...
	I1209 12:12:42.597324  668975 start.go:128] duration metric: took 23.402590508s to createHost
	I1209 12:12:42.597356  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHHostname
	I1209 12:12:42.599681  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.600008  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:cb:7a", ip: ""} in network mk-newest-cni-932878: {Iface:virbr3 ExpiryTime:2024-12-09 13:12:33 +0000 UTC Type:0 Mac:52:54:00:b1:cb:7a Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:newest-cni-932878 Clientid:01:52:54:00:b1:cb:7a}
	I1209 12:12:42.600045  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.600155  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHPort
	I1209 12:12:42.600325  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHKeyPath
	I1209 12:12:42.600500  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHKeyPath
	I1209 12:12:42.600671  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHUsername
	I1209 12:12:42.600808  668975 main.go:141] libmachine: Using SSH client type: native
	I1209 12:12:42.600971  668975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.104 22 <nil> <nil>}
	I1209 12:12:42.600981  668975 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 12:12:42.702872  668975 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733746362.678695830
	
	I1209 12:12:42.702898  668975 fix.go:216] guest clock: 1733746362.678695830
	I1209 12:12:42.702906  668975 fix.go:229] Guest: 2024-12-09 12:12:42.67869583 +0000 UTC Remote: 2024-12-09 12:12:42.597342979 +0000 UTC m=+23.520609607 (delta=81.352851ms)
	I1209 12:12:42.702935  668975 fix.go:200] guest clock delta is within tolerance: 81.352851ms
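
The fix.go lines above read the guest clock with `date +%s.%N`, compare it to the host clock, and accept the skew when it is small. A hedged sketch of that comparison is below; parseGuestClock and the 2-second tolerance are illustrative (the log only shows that an 81ms delta passed).

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var ns int64
	if frac != "" {
		// Pad or truncate the fractional part to nanoseconds.
		ns, err = strconv.ParseInt((frac + "000000000")[:9], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(s, ns), nil
}

func main() {
	guest, err := parseGuestClock("1733746362.678695830\n")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %v; the host would resync it\n", delta)
	}
}
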
	I1209 12:12:42.702940  668975 start.go:83] releasing machines lock for "newest-cni-932878", held for 23.508370389s
	I1209 12:12:42.702960  668975 main.go:141] libmachine: (newest-cni-932878) Calling .DriverName
	I1209 12:12:42.703274  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetIP
	I1209 12:12:42.706123  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.706509  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:cb:7a", ip: ""} in network mk-newest-cni-932878: {Iface:virbr3 ExpiryTime:2024-12-09 13:12:33 +0000 UTC Type:0 Mac:52:54:00:b1:cb:7a Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:newest-cni-932878 Clientid:01:52:54:00:b1:cb:7a}
	I1209 12:12:42.706536  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.706721  668975 main.go:141] libmachine: (newest-cni-932878) Calling .DriverName
	I1209 12:12:42.707329  668975 main.go:141] libmachine: (newest-cni-932878) Calling .DriverName
	I1209 12:12:42.707535  668975 main.go:141] libmachine: (newest-cni-932878) Calling .DriverName
	I1209 12:12:42.707635  668975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 12:12:42.707701  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHHostname
	I1209 12:12:42.707772  668975 ssh_runner.go:195] Run: cat /version.json
	I1209 12:12:42.707809  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHHostname
	I1209 12:12:42.710642  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.711274  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:cb:7a", ip: ""} in network mk-newest-cni-932878: {Iface:virbr3 ExpiryTime:2024-12-09 13:12:33 +0000 UTC Type:0 Mac:52:54:00:b1:cb:7a Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:newest-cni-932878 Clientid:01:52:54:00:b1:cb:7a}
	I1209 12:12:42.711297  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.711529  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHPort
	I1209 12:12:42.711593  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.711777  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHKeyPath
	I1209 12:12:42.712155  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:cb:7a", ip: ""} in network mk-newest-cni-932878: {Iface:virbr3 ExpiryTime:2024-12-09 13:12:33 +0000 UTC Type:0 Mac:52:54:00:b1:cb:7a Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:newest-cni-932878 Clientid:01:52:54:00:b1:cb:7a}
	I1209 12:12:42.712168  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHUsername
	I1209 12:12:42.712182  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:42.712370  668975 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/newest-cni-932878/id_rsa Username:docker}
	I1209 12:12:42.712418  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHPort
	I1209 12:12:42.712593  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHKeyPath
	I1209 12:12:42.712762  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetSSHUsername
	I1209 12:12:42.712917  668975 sshutil.go:53] new ssh client: &{IP:192.168.61.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/newest-cni-932878/id_rsa Username:docker}
	I1209 12:12:42.788156  668975 ssh_runner.go:195] Run: systemctl --version
	I1209 12:12:42.826464  668975 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 12:12:42.988109  668975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 12:12:42.994391  668975 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 12:12:42.994471  668975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 12:12:43.009466  668975 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 12:12:43.009498  668975 start.go:495] detecting cgroup driver to use...
	I1209 12:12:43.009586  668975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 12:12:43.025978  668975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 12:12:43.039310  668975 docker.go:217] disabling cri-docker service (if available) ...
	I1209 12:12:43.039389  668975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 12:12:43.053352  668975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 12:12:43.066795  668975 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 12:12:43.191230  668975 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 12:12:43.344584  668975 docker.go:233] disabling docker service ...
	I1209 12:12:43.344683  668975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 12:12:43.359974  668975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 12:12:43.373572  668975 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 12:12:43.509922  668975 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 12:12:43.628091  668975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 12:12:43.641855  668975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 12:12:43.662898  668975 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 12:12:43.662980  668975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 12:12:43.674619  668975 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 12:12:43.674726  668975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 12:12:43.685355  668975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 12:12:43.695586  668975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 12:12:43.705772  668975 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 12:12:43.715668  668975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 12:12:43.725760  668975 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 12:12:43.742368  668975 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
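
The chain of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager. A small in-process equivalent of the first two substitutions is sketched below; configureCrio is an illustrative helper that operates on the file contents as a string.

package main

import (
	"fmt"
	"regexp"
)

// configureCrio applies the same substitutions as the logged sed commands:
// replace any pause_image line and any cgroup_manager line with the desired values.
func configureCrio(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	in := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
`
	fmt.Print(configureCrio(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
}
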
	I1209 12:12:43.752435  668975 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 12:12:43.761631  668975 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 12:12:43.761696  668975 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 12:12:43.774107  668975 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 12:12:43.783493  668975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 12:12:43.900033  668975 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 12:12:43.992346  668975 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 12:12:43.992418  668975 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 12:12:43.996827  668975 start.go:563] Will wait 60s for crictl version
	I1209 12:12:43.996896  668975 ssh_runner.go:195] Run: which crictl
	I1209 12:12:44.000674  668975 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 12:12:44.049020  668975 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 12:12:44.049107  668975 ssh_runner.go:195] Run: crio --version
	I1209 12:12:44.083504  668975 ssh_runner.go:195] Run: crio --version
	I1209 12:12:44.118639  668975 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 12:12:44.119942  668975 main.go:141] libmachine: (newest-cni-932878) Calling .GetIP
	I1209 12:12:44.122711  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:44.123018  668975 main.go:141] libmachine: (newest-cni-932878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:cb:7a", ip: ""} in network mk-newest-cni-932878: {Iface:virbr3 ExpiryTime:2024-12-09 13:12:33 +0000 UTC Type:0 Mac:52:54:00:b1:cb:7a Iaid: IPaddr:192.168.61.104 Prefix:24 Hostname:newest-cni-932878 Clientid:01:52:54:00:b1:cb:7a}
	I1209 12:12:44.123061  668975 main.go:141] libmachine: (newest-cni-932878) DBG | domain newest-cni-932878 has defined IP address 192.168.61.104 and MAC address 52:54:00:b1:cb:7a in network mk-newest-cni-932878
	I1209 12:12:44.123338  668975 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 12:12:44.127742  668975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 12:12:44.141219  668975 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1209 12:12:44.142496  668975 kubeadm.go:883] updating cluster {Name:newest-cni-932878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:newest-cni-932878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 12:12:44.142614  668975 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 12:12:44.142685  668975 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 12:12:44.173951  668975 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 12:12:44.174036  668975 ssh_runner.go:195] Run: which lz4
	I1209 12:12:44.177817  668975 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 12:12:44.181715  668975 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 12:12:44.181754  668975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 12:12:45.412964  668975 crio.go:462] duration metric: took 1.235190952s to copy over tarball
	I1209 12:12:45.413046  668975 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 12:12:47.537563  668975 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.124482988s)
	I1209 12:12:47.537594  668975 crio.go:469] duration metric: took 2.12459605s to extract the tarball
	I1209 12:12:47.537601  668975 ssh_runner.go:146] rm: /preloaded.tar.lz4
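
The preload step above copies a ~392 MB lz4 tarball to the VM, unpacks it into /var with tar, and removes it. A minimal sketch of driving that extraction and reporting a duration metric follows; it simply shells out to the same tar invocation the log shows and assumes lz4 is installed on the host.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("duration metric: took %v to extract the tarball\n", time.Since(start))
	// Clean up the tarball afterwards, as the log does.
	_ = exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run()
}
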
	I1209 12:12:47.578217  668975 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 12:12:47.627173  668975 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 12:12:47.627201  668975 cache_images.go:84] Images are preloaded, skipping loading
	I1209 12:12:47.627210  668975 kubeadm.go:934] updating node { 192.168.61.104 8443 v1.31.2 crio true true} ...
	I1209 12:12:47.627361  668975 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-932878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-932878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 12:12:47.627442  668975 ssh_runner.go:195] Run: crio config
	I1209 12:12:47.671932  668975 cni.go:84] Creating CNI manager for ""
	I1209 12:12:47.671964  668975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 12:12:47.671977  668975 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1209 12:12:47.672014  668975 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.104 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-932878 NodeName:newest-cni-932878 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.61.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 12:12:47.672142  668975 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-932878"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.104"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.104"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 12:12:47.672207  668975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 12:12:47.682075  668975 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 12:12:47.682134  668975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 12:12:47.691324  668975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1209 12:12:47.707943  668975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 12:12:47.724967  668975 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2487 bytes)
	I1209 12:12:47.741906  668975 ssh_runner.go:195] Run: grep 192.168.61.104	control-plane.minikube.internal$ /etc/hosts
	I1209 12:12:47.745634  668975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 12:12:47.757736  668975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 12:12:47.901416  668975 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 12:12:47.919605  668975 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878 for IP: 192.168.61.104
	I1209 12:12:47.919641  668975 certs.go:194] generating shared ca certs ...
	I1209 12:12:47.919666  668975 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:12:47.919921  668975 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 12:12:47.920009  668975 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 12:12:47.920028  668975 certs.go:256] generating profile certs ...
	I1209 12:12:47.920226  668975 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/client.key
	I1209 12:12:47.920265  668975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/client.crt with IP's: []
	I1209 12:12:48.254253  668975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/client.crt ...
	I1209 12:12:48.254290  668975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/client.crt: {Name:mk0ed31db4c4c4ffd1bc5bbf9691236e21d28964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:12:48.254477  668975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/client.key ...
	I1209 12:12:48.254488  668975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/client.key: {Name:mk0ad44826e2736e638dc559ec3346eefda01eb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:12:48.254565  668975 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/apiserver.key.17df21af
	I1209 12:12:48.254580  668975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/apiserver.crt.17df21af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.104]
	I1209 12:12:48.375320  668975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/apiserver.crt.17df21af ...
	I1209 12:12:48.375360  668975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/apiserver.crt.17df21af: {Name:mk76d987f674b609677c2d57f2adb01421e527b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:12:48.375584  668975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/apiserver.key.17df21af ...
	I1209 12:12:48.375602  668975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/apiserver.key.17df21af: {Name:mkfc0cdc2903cd23505359ede464f593bcd3fdc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:12:48.375716  668975 certs.go:381] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/apiserver.crt.17df21af -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/apiserver.crt
	I1209 12:12:48.375852  668975 certs.go:385] copying /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/apiserver.key.17df21af -> /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/apiserver.key
	I1209 12:12:48.375929  668975 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/proxy-client.key
	I1209 12:12:48.375949  668975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/proxy-client.crt with IP's: []
	I1209 12:12:48.522731  668975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/proxy-client.crt ...
	I1209 12:12:48.522780  668975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/proxy-client.crt: {Name:mk35bd9b7ffea890418b61e17a4d5c4bf4099ec3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:12:48.523029  668975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/proxy-client.key ...
	I1209 12:12:48.523060  668975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/proxy-client.key: {Name:mkabcf74c3bbb19333f15a20c445531ccb216606 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 12:12:48.523315  668975 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 12:12:48.523357  668975 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 12:12:48.523369  668975 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 12:12:48.523391  668975 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 12:12:48.523415  668975 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 12:12:48.523436  668975 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 12:12:48.523473  668975 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 12:12:48.524161  668975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 12:12:48.549683  668975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 12:12:48.573696  668975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 12:12:48.597538  668975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 12:12:48.622683  668975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 12:12:48.647895  668975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 12:12:48.674868  668975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 12:12:48.699846  668975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/newest-cni-932878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 12:12:48.733042  668975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 12:12:48.756337  668975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 12:12:48.779406  668975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 12:12:48.802877  668975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 12:12:48.820150  668975 ssh_runner.go:195] Run: openssl version
	I1209 12:12:48.825921  668975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 12:12:48.837003  668975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 12:12:48.841484  668975 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 12:12:48.841546  668975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 12:12:48.847458  668975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 12:12:48.857952  668975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 12:12:48.869140  668975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 12:12:48.873880  668975 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 12:12:48.873950  668975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 12:12:48.880011  668975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 12:12:48.891334  668975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 12:12:48.902647  668975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 12:12:48.907378  668975 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 12:12:48.907486  668975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 12:12:48.913192  668975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 12:12:48.923815  668975 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 12:12:48.928243  668975 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 12:12:48.928316  668975 kubeadm.go:392] StartCluster: {Name:newest-cni-932878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:newest-cni-932878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.104 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 12:12:48.928420  668975 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 12:12:48.928485  668975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 12:12:48.964477  668975 cri.go:89] found id: ""
	I1209 12:12:48.964596  668975 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 12:12:48.974504  668975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 12:12:48.984204  668975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 12:12:48.995208  668975 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 12:12:48.995230  668975 kubeadm.go:157] found existing configuration files:
	
	I1209 12:12:48.995288  668975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 12:12:49.005226  668975 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 12:12:49.005289  668975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 12:12:49.016479  668975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 12:12:49.026063  668975 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 12:12:49.026127  668975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 12:12:49.037135  668975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 12:12:49.046140  668975 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 12:12:49.046240  668975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 12:12:49.057173  668975 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 12:12:49.067530  668975 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 12:12:49.067601  668975 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 12:12:49.078513  668975 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 12:12:49.192626  668975 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 12:12:49.192756  668975 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 12:12:49.299070  668975 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 12:12:49.299251  668975 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 12:12:49.299419  668975 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 12:12:49.308453  668975 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 12:12:49.524011  668975 out.go:235]   - Generating certificates and keys ...
	I1209 12:12:49.524167  668975 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 12:12:49.524268  668975 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 12:12:49.524427  668975 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 12:12:49.554731  668975 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 12:12:49.761845  668975 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 12:12:49.986325  668975 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 12:12:50.082949  668975 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 12:12:50.083178  668975 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-932878] and IPs [192.168.61.104 127.0.0.1 ::1]
	I1209 12:12:50.321047  668975 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 12:12:50.321359  668975 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-932878] and IPs [192.168.61.104 127.0.0.1 ::1]
	I1209 12:12:50.426907  668975 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 12:12:50.814932  668975 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 12:12:51.000986  668975 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 12:12:51.001117  668975 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 12:12:51.296442  668975 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 12:12:51.423026  668975 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 12:12:51.526504  668975 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 12:12:51.722969  668975 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 12:12:52.099655  668975 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 12:12:52.100511  668975 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 12:12:52.105185  668975 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 12:12:52.106745  668975 out.go:235]   - Booting up control plane ...
	I1209 12:12:52.106869  668975 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 12:12:52.107535  668975 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 12:12:52.108680  668975 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 12:12:52.128763  668975 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 12:12:52.136930  668975 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 12:12:52.137019  668975 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 12:12:52.266096  668975 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 12:12:52.266276  668975 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 12:12:52.767078  668975 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.598507ms
	I1209 12:12:52.767210  668975 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 12:12:58.267033  668975 kubeadm.go:310] [api-check] The API server is healthy after 5.501520947s
	I1209 12:12:58.279975  668975 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 12:12:58.301763  668975 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 12:12:58.334389  668975 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 12:12:58.335033  668975 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-932878 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 12:12:58.353423  668975 kubeadm.go:310] [bootstrap-token] Using token: mgsqx2.42ezj9iu1k5n8vml
	I1209 12:12:58.355161  668975 out.go:235]   - Configuring RBAC rules ...
	I1209 12:12:58.355305  668975 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 12:12:58.361269  668975 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 12:12:58.374315  668975 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 12:12:58.381825  668975 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 12:12:58.386069  668975 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 12:12:58.390609  668975 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 12:12:58.674577  668975 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 12:12:59.113923  668975 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 12:12:59.672078  668975 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 12:12:59.673346  668975 kubeadm.go:310] 
	I1209 12:12:59.673429  668975 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 12:12:59.673467  668975 kubeadm.go:310] 
	I1209 12:12:59.673622  668975 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 12:12:59.673637  668975 kubeadm.go:310] 
	I1209 12:12:59.673678  668975 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 12:12:59.673756  668975 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 12:12:59.673824  668975 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 12:12:59.673837  668975 kubeadm.go:310] 
	I1209 12:12:59.673881  668975 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 12:12:59.673912  668975 kubeadm.go:310] 
	I1209 12:12:59.674037  668975 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 12:12:59.674059  668975 kubeadm.go:310] 
	I1209 12:12:59.674128  668975 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 12:12:59.674244  668975 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 12:12:59.674340  668975 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 12:12:59.674351  668975 kubeadm.go:310] 
	I1209 12:12:59.674468  668975 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 12:12:59.674575  668975 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 12:12:59.674585  668975 kubeadm.go:310] 
	I1209 12:12:59.674709  668975 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mgsqx2.42ezj9iu1k5n8vml \
	I1209 12:12:59.674860  668975 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 12:12:59.674895  668975 kubeadm.go:310] 	--control-plane 
	I1209 12:12:59.674904  668975 kubeadm.go:310] 
	I1209 12:12:59.675003  668975 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 12:12:59.675022  668975 kubeadm.go:310] 
	I1209 12:12:59.675136  668975 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mgsqx2.42ezj9iu1k5n8vml \
	I1209 12:12:59.675241  668975 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 12:12:59.675928  668975 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 12:12:59.675960  668975 cni.go:84] Creating CNI manager for ""
	I1209 12:12:59.675975  668975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 12:12:59.677930  668975 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.831306462Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746382831280971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15c57336-8d1e-4236-83f9-477f21fda18e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.832090102Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a736547-9631-458d-8c00-bef4063a2204 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.832174653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a736547-9631-458d-8c00-bef4063a2204 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.832418409Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd836c617c4c71eefb766c2dfb55170cf3cf91517592b1a7a183c74e32ea64a6,PodSandboxId:7455f6989fecae39d7d0c95e8bc7072133ece82c435ce6ed37b5f621db26a696,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745474452157860,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ceb801-7262-4d7e-9623-c8c1931fc34b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e04e6c67eb04484f0a7ed6ae026d286dbe58b1771e200b50a3b5fb3155cfd2,PodSandboxId:6f09b37ff62169ca1ef8b5d9ea743a40d10b39438e49add1cdf06c9242f0bad5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745473966659142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xspr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9384e9ea-987e-4728-bdf2-773645d52ab1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad57f45638e33d61673cc77cf320de335668a20f9d834892bcd702efb4ff209,PodSandboxId:8576a7808ab702e2fa9b5d11849da794aed95311a7db73c0a1536df14395d7c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745473869691762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-t49mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
a3ba094-58a2-401d-8aea-46d6d96baacb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527b59b253be0db5e7d00289763d2f0aeea7b9d27b8830656ccecafd25947cf8,PodSandboxId:c4229a54854342602f545e73b05d2e5f6e82c169027d56572c6bb1c6daaab695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733745473444443662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n4pph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520d101f-0df0-413f-a0fc-22ecc2884d40,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f57c0d1a2bfd5d40b6509193ea8dc5b5a600119199f1468b5c725144ac6de3,PodSandboxId:b8f808142acc4c40969cb81f766a314d2992f607a199f6939bcbf4fd9da1f70e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733745462474273995
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897c2c22804a3f807b6c53388bf5ae22,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:528aa672a3fab35454ac1a9762bde88dacd8b0f9c91555af3a1d1f93061a1350,PodSandboxId:d30789ee9d14a7988cb1126d5d97bf77b940b8ccac52cf00674379b888851603,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745462471
449496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9e6f1bc197424a6926dd5e34d40c87175209f99a1e583f8dbdba504862c6f8,PodSandboxId:43b4489819cbc50f99af06bb8570b409765aaac8ce25482cca038fb31307432f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733745462431601063,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e13b64e3c640916722b20659a0937fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9babc273bf1d0f041ccf16c5711057fdc4abc34dc992320f1aefd25a4d5b36e,PodSandboxId:c97ab69d81cf40be8be853b2726f9c74e5222b50fc79b93e034ff92da0a4c035,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733745462358888452,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc26478084558b0a11e6df527ae8916,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecbfc4afc60ee805fae40bf4534caf8357b9374f4449f3faac32927d8404ae4,PodSandboxId:6211679fe2c07ed8493a037532ef67b2673ec799f6e8f3a0ff2af327b3452fa6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733745175261468738,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a736547-9631-458d-8c00-bef4063a2204 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.875481624Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=01c81624-1f27-43c5-a41d-dcf9c55d1294 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.875798577Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f6e7791f884caa9e8bbad04970f74e52becda3e5ef2fbf1b3ce797c7e1e7ad03,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-zfw9r,Uid:8438b820-4cc5-4d7b-8af5-9349fdd87ca8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733745474475011505,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-zfw9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8438b820-4cc5-4d7b-8af5-9349fdd87ca8,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T11:57:54.166506179Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7455f6989fecae39d7d0c95e8bc7072133ece82c435ce6ed37b5f621db26a696,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:91ceb801-7262-4d7e-9623-c8c1931fc34b,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733745474321613775,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ceb801-7262-4d7e-9623-c8c1931fc34b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-09T11:57:54.009884867Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6f09b37ff62169ca1ef8b5d9ea743a40d10b39438e49add1cdf06c9242f0bad5,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-xspr9,Uid:9384e9ea-987e-4728-bdf2-773645d52ab1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733745473243857196,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-xspr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9384e9ea-987e-4728-bdf2-773645d52ab1,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T11:57:52.925476106Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8576a7808ab702e2fa9b5d11849da794aed95311a7db73c0a1536df14395d7c2,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-t49mk,Uid:ca3ba094-58a2-401d
-8aea-46d6d96baacb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733745473192193537,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-t49mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca3ba094-58a2-401d-8aea-46d6d96baacb,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T11:57:52.881536912Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c4229a54854342602f545e73b05d2e5f6e82c169027d56572c6bb1c6daaab695,Metadata:&PodSandboxMetadata{Name:kube-proxy-n4pph,Uid:520d101f-0df0-413f-a0fc-22ecc2884d40,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733745473099090666,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-n4pph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520d101f-0df0-413f-a0fc-22ecc2884d40,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T11:57:52.785534692Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d30789ee9d14a7988cb1126d5d97bf77b940b8ccac52cf00674379b888851603,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-005123,Uid:b46485e7a34e26d2ec4507b212be06ea,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1733745462234810987,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.218:8443,kubernetes.io/config.hash: b46485e7a34e26d2ec4507b212be06ea,kubernetes.io/config.seen: 2024-12-09T11:57:41.787527817Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b8f808142acc4c40969cb81f766a
314d2992f607a199f6939bcbf4fd9da1f70e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-005123,Uid:897c2c22804a3f807b6c53388bf5ae22,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733745462231740538,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897c2c22804a3f807b6c53388bf5ae22,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 897c2c22804a3f807b6c53388bf5ae22,kubernetes.io/config.seen: 2024-12-09T11:57:41.787528846Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c97ab69d81cf40be8be853b2726f9c74e5222b50fc79b93e034ff92da0a4c035,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-005123,Uid:fdc26478084558b0a11e6df527ae8916,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733745462216855041,Labels:map[string]string{component: etcd,io.kube
rnetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc26478084558b0a11e6df527ae8916,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.218:2379,kubernetes.io/config.hash: fdc26478084558b0a11e6df527ae8916,kubernetes.io/config.seen: 2024-12-09T11:57:41.787526483Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:43b4489819cbc50f99af06bb8570b409765aaac8ce25482cca038fb31307432f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-005123,Uid:2e13b64e3c640916722b20659a0937fc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733745462207201449,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e13b64e3c640916722b20659a0937fc,tier: control-plane,},Annotations:map[str
ing]string{kubernetes.io/config.hash: 2e13b64e3c640916722b20659a0937fc,kubernetes.io/config.seen: 2024-12-09T11:57:41.787523355Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6211679fe2c07ed8493a037532ef67b2673ec799f6e8f3a0ff2af327b3452fa6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-005123,Uid:b46485e7a34e26d2ec4507b212be06ea,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1733745173892086998,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.218:8443,kubernetes.io/config.hash: b46485e7a34e26d2ec4507b212be06ea,kubernetes.io/config.seen: 2024-12-09T11:52:53.387166268Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-coll
ector/interceptors.go:74" id=01c81624-1f27-43c5-a41d-dcf9c55d1294 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.876471989Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2b0ae33-99fc-4d13-80fe-894cd071c8c7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.876548070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2b0ae33-99fc-4d13-80fe-894cd071c8c7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.876788339Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd836c617c4c71eefb766c2dfb55170cf3cf91517592b1a7a183c74e32ea64a6,PodSandboxId:7455f6989fecae39d7d0c95e8bc7072133ece82c435ce6ed37b5f621db26a696,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745474452157860,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ceb801-7262-4d7e-9623-c8c1931fc34b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e04e6c67eb04484f0a7ed6ae026d286dbe58b1771e200b50a3b5fb3155cfd2,PodSandboxId:6f09b37ff62169ca1ef8b5d9ea743a40d10b39438e49add1cdf06c9242f0bad5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745473966659142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xspr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9384e9ea-987e-4728-bdf2-773645d52ab1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad57f45638e33d61673cc77cf320de335668a20f9d834892bcd702efb4ff209,PodSandboxId:8576a7808ab702e2fa9b5d11849da794aed95311a7db73c0a1536df14395d7c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745473869691762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-t49mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
a3ba094-58a2-401d-8aea-46d6d96baacb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527b59b253be0db5e7d00289763d2f0aeea7b9d27b8830656ccecafd25947cf8,PodSandboxId:c4229a54854342602f545e73b05d2e5f6e82c169027d56572c6bb1c6daaab695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733745473444443662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n4pph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520d101f-0df0-413f-a0fc-22ecc2884d40,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f57c0d1a2bfd5d40b6509193ea8dc5b5a600119199f1468b5c725144ac6de3,PodSandboxId:b8f808142acc4c40969cb81f766a314d2992f607a199f6939bcbf4fd9da1f70e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733745462474273995
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897c2c22804a3f807b6c53388bf5ae22,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:528aa672a3fab35454ac1a9762bde88dacd8b0f9c91555af3a1d1f93061a1350,PodSandboxId:d30789ee9d14a7988cb1126d5d97bf77b940b8ccac52cf00674379b888851603,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745462471
449496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9e6f1bc197424a6926dd5e34d40c87175209f99a1e583f8dbdba504862c6f8,PodSandboxId:43b4489819cbc50f99af06bb8570b409765aaac8ce25482cca038fb31307432f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733745462431601063,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e13b64e3c640916722b20659a0937fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9babc273bf1d0f041ccf16c5711057fdc4abc34dc992320f1aefd25a4d5b36e,PodSandboxId:c97ab69d81cf40be8be853b2726f9c74e5222b50fc79b93e034ff92da0a4c035,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733745462358888452,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc26478084558b0a11e6df527ae8916,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecbfc4afc60ee805fae40bf4534caf8357b9374f4449f3faac32927d8404ae4,PodSandboxId:6211679fe2c07ed8493a037532ef67b2673ec799f6e8f3a0ff2af327b3452fa6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733745175261468738,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2b0ae33-99fc-4d13-80fe-894cd071c8c7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.878486711Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9dcf00cd-8f95-42c0-9412-5ec75c4cf0fa name=/runtime.v1.RuntimeService/Version
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.878559640Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9dcf00cd-8f95-42c0-9412-5ec75c4cf0fa name=/runtime.v1.RuntimeService/Version
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.880199105Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58b241ae-399c-4a62-96ba-8ba169eb8098 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.880617847Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746382880598482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58b241ae-399c-4a62-96ba-8ba169eb8098 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.881272280Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56bfac3b-8eeb-4ac8-a0a2-0b871a91cbc1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.881342583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56bfac3b-8eeb-4ac8-a0a2-0b871a91cbc1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.881525250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd836c617c4c71eefb766c2dfb55170cf3cf91517592b1a7a183c74e32ea64a6,PodSandboxId:7455f6989fecae39d7d0c95e8bc7072133ece82c435ce6ed37b5f621db26a696,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745474452157860,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ceb801-7262-4d7e-9623-c8c1931fc34b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e04e6c67eb04484f0a7ed6ae026d286dbe58b1771e200b50a3b5fb3155cfd2,PodSandboxId:6f09b37ff62169ca1ef8b5d9ea743a40d10b39438e49add1cdf06c9242f0bad5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745473966659142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xspr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9384e9ea-987e-4728-bdf2-773645d52ab1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad57f45638e33d61673cc77cf320de335668a20f9d834892bcd702efb4ff209,PodSandboxId:8576a7808ab702e2fa9b5d11849da794aed95311a7db73c0a1536df14395d7c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745473869691762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-t49mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
a3ba094-58a2-401d-8aea-46d6d96baacb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527b59b253be0db5e7d00289763d2f0aeea7b9d27b8830656ccecafd25947cf8,PodSandboxId:c4229a54854342602f545e73b05d2e5f6e82c169027d56572c6bb1c6daaab695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733745473444443662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n4pph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520d101f-0df0-413f-a0fc-22ecc2884d40,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f57c0d1a2bfd5d40b6509193ea8dc5b5a600119199f1468b5c725144ac6de3,PodSandboxId:b8f808142acc4c40969cb81f766a314d2992f607a199f6939bcbf4fd9da1f70e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733745462474273995
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897c2c22804a3f807b6c53388bf5ae22,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:528aa672a3fab35454ac1a9762bde88dacd8b0f9c91555af3a1d1f93061a1350,PodSandboxId:d30789ee9d14a7988cb1126d5d97bf77b940b8ccac52cf00674379b888851603,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745462471
449496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9e6f1bc197424a6926dd5e34d40c87175209f99a1e583f8dbdba504862c6f8,PodSandboxId:43b4489819cbc50f99af06bb8570b409765aaac8ce25482cca038fb31307432f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733745462431601063,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e13b64e3c640916722b20659a0937fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9babc273bf1d0f041ccf16c5711057fdc4abc34dc992320f1aefd25a4d5b36e,PodSandboxId:c97ab69d81cf40be8be853b2726f9c74e5222b50fc79b93e034ff92da0a4c035,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733745462358888452,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc26478084558b0a11e6df527ae8916,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecbfc4afc60ee805fae40bf4534caf8357b9374f4449f3faac32927d8404ae4,PodSandboxId:6211679fe2c07ed8493a037532ef67b2673ec799f6e8f3a0ff2af327b3452fa6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733745175261468738,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56bfac3b-8eeb-4ac8-a0a2-0b871a91cbc1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.906834210Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=1ed94533-d1d2-416d-a918-1017c0a61204 name=/runtime.v1.RuntimeService/Status
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.906971032Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=1ed94533-d1d2-416d-a918-1017c0a61204 name=/runtime.v1.RuntimeService/Status
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.922244546Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fee9974b-1292-461c-a34d-aac154f378c0 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.922354544Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fee9974b-1292-461c-a34d-aac154f378c0 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.923441407Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a9166e2-850b-44df-a0b0-fae478b45586 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.923848432Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746382923825395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a9166e2-850b-44df-a0b0-fae478b45586 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.924406510Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=624539a3-fb69-4220-aae6-3480c80e407f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.924472060Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=624539a3-fb69-4220-aae6-3480c80e407f name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:13:02 embed-certs-005123 crio[702]: time="2024-12-09 12:13:02.924678873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd836c617c4c71eefb766c2dfb55170cf3cf91517592b1a7a183c74e32ea64a6,PodSandboxId:7455f6989fecae39d7d0c95e8bc7072133ece82c435ce6ed37b5f621db26a696,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733745474452157860,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91ceb801-7262-4d7e-9623-c8c1931fc34b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83e04e6c67eb04484f0a7ed6ae026d286dbe58b1771e200b50a3b5fb3155cfd2,PodSandboxId:6f09b37ff62169ca1ef8b5d9ea743a40d10b39438e49add1cdf06c9242f0bad5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745473966659142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xspr9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9384e9ea-987e-4728-bdf2-773645d52ab1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad57f45638e33d61673cc77cf320de335668a20f9d834892bcd702efb4ff209,PodSandboxId:8576a7808ab702e2fa9b5d11849da794aed95311a7db73c0a1536df14395d7c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733745473869691762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-t49mk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c
a3ba094-58a2-401d-8aea-46d6d96baacb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:527b59b253be0db5e7d00289763d2f0aeea7b9d27b8830656ccecafd25947cf8,PodSandboxId:c4229a54854342602f545e73b05d2e5f6e82c169027d56572c6bb1c6daaab695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733745473444443662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n4pph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 520d101f-0df0-413f-a0fc-22ecc2884d40,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f57c0d1a2bfd5d40b6509193ea8dc5b5a600119199f1468b5c725144ac6de3,PodSandboxId:b8f808142acc4c40969cb81f766a314d2992f607a199f6939bcbf4fd9da1f70e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733745462474273995
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897c2c22804a3f807b6c53388bf5ae22,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:528aa672a3fab35454ac1a9762bde88dacd8b0f9c91555af3a1d1f93061a1350,PodSandboxId:d30789ee9d14a7988cb1126d5d97bf77b940b8ccac52cf00674379b888851603,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733745462471
449496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9e6f1bc197424a6926dd5e34d40c87175209f99a1e583f8dbdba504862c6f8,PodSandboxId:43b4489819cbc50f99af06bb8570b409765aaac8ce25482cca038fb31307432f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733745462431601063,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e13b64e3c640916722b20659a0937fc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9babc273bf1d0f041ccf16c5711057fdc4abc34dc992320f1aefd25a4d5b36e,PodSandboxId:c97ab69d81cf40be8be853b2726f9c74e5222b50fc79b93e034ff92da0a4c035,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733745462358888452,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdc26478084558b0a11e6df527ae8916,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ecbfc4afc60ee805fae40bf4534caf8357b9374f4449f3faac32927d8404ae4,PodSandboxId:6211679fe2c07ed8493a037532ef67b2673ec799f6e8f3a0ff2af327b3452fa6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733745175261468738,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-005123,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46485e7a34e26d2ec4507b212be06ea,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=624539a3-fb69-4220-aae6-3480c80e407f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bd836c617c4c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   7455f6989feca       storage-provisioner
	83e04e6c67eb0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   6f09b37ff6216       coredns-7c65d6cfc9-xspr9
	1ad57f45638e3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   8576a7808ab70       coredns-7c65d6cfc9-t49mk
	527b59b253be0       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   15 minutes ago      Running             kube-proxy                0                   c4229a5485434       kube-proxy-n4pph
	c6f57c0d1a2bf       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   15 minutes ago      Running             kube-controller-manager   2                   b8f808142acc4       kube-controller-manager-embed-certs-005123
	528aa672a3fab       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   15 minutes ago      Running             kube-apiserver            2                   d30789ee9d14a       kube-apiserver-embed-certs-005123
	da9e6f1bc1974       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   15 minutes ago      Running             kube-scheduler            2                   43b4489819cbc       kube-scheduler-embed-certs-005123
	d9babc273bf1d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   c97ab69d81cf4       etcd-embed-certs-005123
	9ecbfc4afc60e       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   20 minutes ago      Exited              kube-apiserver            1                   6211679fe2c07       kube-apiserver-embed-certs-005123
	
	
	==> coredns [1ad57f45638e33d61673cc77cf320de335668a20f9d834892bcd702efb4ff209] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [83e04e6c67eb04484f0a7ed6ae026d286dbe58b1771e200b50a3b5fb3155cfd2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-005123
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-005123
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=embed-certs-005123
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T11_57_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 11:57:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-005123
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 12:12:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 12:08:09 +0000   Mon, 09 Dec 2024 11:57:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 12:08:09 +0000   Mon, 09 Dec 2024 11:57:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 12:08:09 +0000   Mon, 09 Dec 2024 11:57:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 12:08:09 +0000   Mon, 09 Dec 2024 11:57:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.218
	  Hostname:    embed-certs-005123
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4bae420e2f61438cbcb08aca330ef929
	  System UUID:                4bae420e-2f61-438c-bcb0-8aca330ef929
	  Boot ID:                    540eed1d-106c-4560-9304-3f7bc5c5d90e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-t49mk                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-xspr9                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-005123                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-005123             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-005123    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-n4pph                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-005123             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-zfw9r               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-005123 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-005123 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-005123 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-005123 event: Registered Node embed-certs-005123 in Controller
	
	
	==> dmesg <==
	[  +0.055350] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041048] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.019448] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.164562] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.628061] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.346347] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.057637] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063658] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.178175] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.140169] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.282387] systemd-fstab-generator[693]: Ignoring "noauto" option for root device
	[  +4.073106] systemd-fstab-generator[781]: Ignoring "noauto" option for root device
	[  +1.974083] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +0.067432] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.519404] kauditd_printk_skb: 69 callbacks suppressed
	[Dec 9 11:53] kauditd_printk_skb: 90 callbacks suppressed
	[Dec 9 11:57] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.346175] systemd-fstab-generator[2619]: Ignoring "noauto" option for root device
	[  +4.538537] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.500808] systemd-fstab-generator[2935]: Ignoring "noauto" option for root device
	[  +5.429186] systemd-fstab-generator[3048]: Ignoring "noauto" option for root device
	[  +0.084529] kauditd_printk_skb: 14 callbacks suppressed
	[Dec 9 11:58] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [d9babc273bf1d0f041ccf16c5711057fdc4abc34dc992320f1aefd25a4d5b36e] <==
	{"level":"info","ts":"2024-12-09T11:57:42.716194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"920986b861bdd178 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-09T11:57:42.716357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"920986b861bdd178 received MsgPreVoteResp from 920986b861bdd178 at term 1"}
	{"level":"info","ts":"2024-12-09T11:57:42.716396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"920986b861bdd178 became candidate at term 2"}
	{"level":"info","ts":"2024-12-09T11:57:42.716459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"920986b861bdd178 received MsgVoteResp from 920986b861bdd178 at term 2"}
	{"level":"info","ts":"2024-12-09T11:57:42.716556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"920986b861bdd178 became leader at term 2"}
	{"level":"info","ts":"2024-12-09T11:57:42.716584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 920986b861bdd178 elected leader 920986b861bdd178 at term 2"}
	{"level":"info","ts":"2024-12-09T11:57:42.721652Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"920986b861bdd178","local-member-attributes":"{Name:embed-certs-005123 ClientURLs:[https://192.168.72.218:2379]}","request-path":"/0/members/920986b861bdd178/attributes","cluster-id":"fc7b2fb2a5a2cf43","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-09T11:57:42.723028Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T11:57:42.723434Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:57:42.723574Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T11:57:42.725383Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T11:57:42.726112Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.218:2379"}
	{"level":"info","ts":"2024-12-09T11:57:42.726206Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fc7b2fb2a5a2cf43","local-member-id":"920986b861bdd178","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:57:42.726280Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:57:42.726312Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:57:42.738991Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-09T11:57:42.739027Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-09T11:57:42.739547Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T11:57:42.740344Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-09T12:07:43.590182Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":688}
	{"level":"info","ts":"2024-12-09T12:07:43.598720Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":688,"took":"7.785943ms","hash":1006150834,"current-db-size-bytes":2306048,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2306048,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-12-09T12:07:43.598803Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1006150834,"revision":688,"compact-revision":-1}
	{"level":"info","ts":"2024-12-09T12:12:43.600481Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":932}
	{"level":"info","ts":"2024-12-09T12:12:43.605809Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":932,"took":"4.501256ms","hash":2225701780,"current-db-size-bytes":2306048,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1564672,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-09T12:12:43.605973Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2225701780,"revision":932,"compact-revision":688}
	
	
	==> kernel <==
	 12:13:03 up 20 min,  0 users,  load average: 0.03, 0.11, 0.12
	Linux embed-certs-005123 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [528aa672a3fab35454ac1a9762bde88dacd8b0f9c91555af3a1d1f93061a1350] <==
	I1209 12:08:45.965004       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:08:45.965076       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1209 12:10:45.965227       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:10:45.965406       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1209 12:10:45.965226       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:10:45.965454       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1209 12:10:45.966738       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:10:45.966775       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1209 12:12:44.964799       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:12:44.965182       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1209 12:12:45.967439       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:12:45.967500       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1209 12:12:45.967677       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 12:12:45.967828       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1209 12:12:45.968720       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1209 12:12:45.970002       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [9ecbfc4afc60ee805fae40bf4534caf8357b9374f4449f3faac32927d8404ae4] <==
	W1209 11:57:35.187126       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.194779       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.206387       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.251234       1 logging.go:55] [core] [Channel #18 SubChannel #19]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.354338       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.549909       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.559656       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.580493       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.591478       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.605147       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.647061       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.743226       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.744433       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.760011       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.764638       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.793517       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.864266       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.957809       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:35.963305       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:36.126666       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:36.343167       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:39.257586       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:39.399697       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:39.421750       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 11:57:39.506029       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [c6f57c0d1a2bfd5d40b6509193ea8dc5b5a600119199f1468b5c725144ac6de3] <==
	E1209 12:07:51.889276       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:07:52.452764       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1209 12:08:09.968200       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-005123"
	E1209 12:08:21.896293       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:08:22.462876       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:08:51.902744       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:08:52.473987       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1209 12:08:59.636286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="241.009µs"
	I1209 12:09:14.626634       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="46.562µs"
	E1209 12:09:21.909345       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:09:22.481544       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:09:51.915604       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:09:52.489915       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:10:21.922267       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:10:22.498162       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:10:51.929620       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:10:52.513512       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:11:21.936022       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:11:22.521750       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:11:51.942255       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:11:52.529656       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:12:21.949840       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:12:22.541863       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1209 12:12:51.957453       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 12:12:52.559319       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [527b59b253be0db5e7d00289763d2f0aeea7b9d27b8830656ccecafd25947cf8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 11:57:54.068875       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 11:57:54.096967       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.218"]
	E1209 11:57:54.097247       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 11:57:54.309274       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 11:57:54.309394       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 11:57:54.309480       1 server_linux.go:169] "Using iptables Proxier"
	I1209 11:57:54.368574       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 11:57:54.368806       1 server.go:483] "Version info" version="v1.31.2"
	I1209 11:57:54.368832       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 11:57:54.371776       1 config.go:199] "Starting service config controller"
	I1209 11:57:54.373211       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 11:57:54.373360       1 config.go:328] "Starting node config controller"
	I1209 11:57:54.382406       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 11:57:54.375882       1 config.go:105] "Starting endpoint slice config controller"
	I1209 11:57:54.382502       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 11:57:54.382604       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 11:57:54.386636       1 shared_informer.go:320] Caches are synced for node config
	I1209 11:57:54.484119       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [da9e6f1bc197424a6926dd5e34d40c87175209f99a1e583f8dbdba504862c6f8] <==
	W1209 11:57:45.893733       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 11:57:45.893890       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:45.902278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 11:57:45.902397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:45.950549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1209 11:57:45.950691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:46.020249       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 11:57:46.020433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:46.078852       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 11:57:46.079029       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:46.087627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1209 11:57:46.087772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:46.116185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1209 11:57:46.116410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:46.130702       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1209 11:57:46.131081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:46.165220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1209 11:57:46.165474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:46.235055       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 11:57:46.235227       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1209 11:57:46.237459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1209 11:57:46.237508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 11:57:46.301529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 11:57:46.301564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1209 11:57:48.398034       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 12:11:47 embed-certs-005123 kubelet[2942]: E1209 12:11:47.815691    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746307815438726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:11:57 embed-certs-005123 kubelet[2942]: E1209 12:11:57.817630    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746317817329551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:11:57 embed-certs-005123 kubelet[2942]: E1209 12:11:57.817696    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746317817329551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:01 embed-certs-005123 kubelet[2942]: E1209 12:12:01.614132    2942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zfw9r" podUID="8438b820-4cc5-4d7b-8af5-9349fdd87ca8"
	Dec 09 12:12:07 embed-certs-005123 kubelet[2942]: E1209 12:12:07.819618    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746327819127253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:07 embed-certs-005123 kubelet[2942]: E1209 12:12:07.820137    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746327819127253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:13 embed-certs-005123 kubelet[2942]: E1209 12:12:13.613828    2942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zfw9r" podUID="8438b820-4cc5-4d7b-8af5-9349fdd87ca8"
	Dec 09 12:12:17 embed-certs-005123 kubelet[2942]: E1209 12:12:17.822278    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746337821865699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:17 embed-certs-005123 kubelet[2942]: E1209 12:12:17.822325    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746337821865699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:24 embed-certs-005123 kubelet[2942]: E1209 12:12:24.614830    2942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zfw9r" podUID="8438b820-4cc5-4d7b-8af5-9349fdd87ca8"
	Dec 09 12:12:27 embed-certs-005123 kubelet[2942]: E1209 12:12:27.823832    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746347823441412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:27 embed-certs-005123 kubelet[2942]: E1209 12:12:27.823860    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746347823441412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:37 embed-certs-005123 kubelet[2942]: E1209 12:12:37.614088    2942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zfw9r" podUID="8438b820-4cc5-4d7b-8af5-9349fdd87ca8"
	Dec 09 12:12:37 embed-certs-005123 kubelet[2942]: E1209 12:12:37.825704    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746357825421852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:37 embed-certs-005123 kubelet[2942]: E1209 12:12:37.825741    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746357825421852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:47 embed-certs-005123 kubelet[2942]: E1209 12:12:47.640246    2942 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 12:12:47 embed-certs-005123 kubelet[2942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 12:12:47 embed-certs-005123 kubelet[2942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 12:12:47 embed-certs-005123 kubelet[2942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 12:12:47 embed-certs-005123 kubelet[2942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 12:12:47 embed-certs-005123 kubelet[2942]: E1209 12:12:47.827111    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746367826623917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:47 embed-certs-005123 kubelet[2942]: E1209 12:12:47.827180    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746367826623917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:51 embed-certs-005123 kubelet[2942]: E1209 12:12:51.613476    2942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zfw9r" podUID="8438b820-4cc5-4d7b-8af5-9349fdd87ca8"
	Dec 09 12:12:57 embed-certs-005123 kubelet[2942]: E1209 12:12:57.829235    2942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746377828716302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 12:12:57 embed-certs-005123 kubelet[2942]: E1209 12:12:57.829592    2942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746377828716302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [bd836c617c4c71eefb766c2dfb55170cf3cf91517592b1a7a183c74e32ea64a6] <==
	I1209 11:57:54.582895       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 11:57:54.593401       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 11:57:54.593458       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 11:57:54.609448       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 11:57:54.609625       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-005123_4ce3b392-4680-457a-956d-eef012adebc5!
	I1209 11:57:54.610638       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7aaac86-6035-4d6d-942e-248efc0c7825", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-005123_4ce3b392-4680-457a-956d-eef012adebc5 became leader
	I1209 11:57:54.710603       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-005123_4ce3b392-4680-457a-956d-eef012adebc5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-005123 -n embed-certs-005123
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-005123 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-zfw9r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-005123 describe pod metrics-server-6867b74b74-zfw9r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-005123 describe pod metrics-server-6867b74b74-zfw9r: exit status 1 (71.407329ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-zfw9r" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-005123 describe pod metrics-server-6867b74b74-zfw9r: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (358.03s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (178.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
[previous warning repeated 16 more times]
E1209 12:09:36.386561  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
E1209 12:11:33.303473  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.132:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.132:8443: connect: connection refused
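The warning above is emitted each time the test helper polls the stopped apiserver for dashboard pods by label selector; the GET shown in the warning is the underlying request. Purely as an illustration (this is not the test's own code), a minimal client-go sketch of that same query, assuming a kubeconfig that resolves to the old-k8s-version-014592 profile, could look like:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig resolution; the real helper derives it from the minikube profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same request as the warning above:
	// GET /api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard
	pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		// While the apiserver is down this fails with "connection refused", as logged above.
		fmt.Println("pod list failed:", err)
		return
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}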
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-014592 -n old-k8s-version-014592
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-014592 -n old-k8s-version-014592: exit status 2 (241.810776ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-014592" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-014592 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-014592 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.584µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-014592 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
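The image assertion above checks that the dashboard-metrics-scraper deployment was started with the overridden echoserver image. As a rough sketch only (deployment and namespace names are taken from the log, not from the test source), the equivalent read once the apiserver is reachable would be:

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig resolution; the test instead shells out to kubectl with the profile context.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	dep, err := client.AppsV1().Deployments("kubernetes-dashboard").Get(context.Background(),
		"dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range dep.Spec.Template.Spec.Containers {
		// The assertion expects one of these images to contain "registry.k8s.io/echoserver:1.4".
		fmt.Println(c.Name, c.Image, strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4"))
	}
}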
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-014592 -n old-k8s-version-014592
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-014592 -n old-k8s-version-014592: exit status 2 (238.391939ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-014592 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-014592 logs -n 25: (1.537888014s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p running-upgrade-119214                              | running-upgrade-119214       | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:43 UTC |
	| delete  | -p                                                     | disable-driver-mounts-905993 | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:43 UTC |
	|         | disable-driver-mounts-905993                           |                              |         |         |                     |                     |
	| start   | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:43 UTC | 09 Dec 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-005123            | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC | 09 Dec 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-005123                                  | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-820741             | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC | 09 Dec 24 11:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:45 UTC | 09 Dec 24 11:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-835095                           | kubernetes-upgrade-835095    | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:46 UTC |
	| start   | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:47 UTC |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-005123                 | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-005123                                  | embed-certs-005123           | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC | 09 Dec 24 11:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-014592        | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:46 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-820741                  | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-820741                                   | no-preload-820741            | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC | 09 Dec 24 11:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-482476  | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC | 09 Dec 24 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:47 UTC |                     |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC | 09 Dec 24 11:48 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-014592             | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC | 09 Dec 24 11:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-014592                              | old-k8s-version-014592       | jenkins | v1.34.0 | 09 Dec 24 11:48 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-482476       | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-482476 | jenkins | v1.34.0 | 09 Dec 24 11:49 UTC | 09 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-482476                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 11:49:59
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 11:49:59.489110  663024 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:49:59.489218  663024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:49:59.489223  663024 out.go:358] Setting ErrFile to fd 2...
	I1209 11:49:59.489227  663024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:49:59.489393  663024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:49:59.489968  663024 out.go:352] Setting JSON to false
	I1209 11:49:59.491001  663024 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":16343,"bootTime":1733728656,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 11:49:59.491116  663024 start.go:139] virtualization: kvm guest
	I1209 11:49:59.493422  663024 out.go:177] * [default-k8s-diff-port-482476] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 11:49:59.495230  663024 notify.go:220] Checking for updates...
	I1209 11:49:59.495310  663024 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:49:59.496833  663024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:49:59.498350  663024 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:49:59.499799  663024 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:49:59.501159  663024 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 11:49:59.502351  663024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:49:59.503976  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:49:59.504355  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:49:59.504434  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:49:59.519867  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39863
	I1209 11:49:59.520292  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:49:59.520859  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:49:59.520886  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:49:59.521235  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:49:59.521438  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:49:59.521739  663024 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:49:59.522124  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:49:59.522225  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:49:59.537355  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39881
	I1209 11:49:59.537882  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:49:59.538473  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:49:59.538507  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:49:59.538862  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:49:59.539111  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:49:59.573642  663024 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 11:49:59.574808  663024 start.go:297] selected driver: kvm2
	I1209 11:49:59.574821  663024 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:49:59.574939  663024 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:49:59.575618  663024 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:49:59.575711  663024 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 11:49:59.591990  663024 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 11:49:59.592425  663024 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:49:59.592468  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:49:59.592500  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:49:59.592535  663024 start.go:340] cluster config:
	{Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:49:59.592645  663024 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 11:49:59.594451  663024 out.go:177] * Starting "default-k8s-diff-port-482476" primary control-plane node in "default-k8s-diff-port-482476" cluster
	I1209 11:49:56.270467  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:49:59.342522  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:49:59.595812  663024 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:49:59.595868  663024 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 11:49:59.595876  663024 cache.go:56] Caching tarball of preloaded images
	I1209 11:49:59.595966  663024 preload.go:172] Found /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 11:49:59.595978  663024 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 11:49:59.596080  663024 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/config.json ...
	I1209 11:49:59.596311  663024 start.go:360] acquireMachinesLock for default-k8s-diff-port-482476: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:50:05.422464  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:08.494459  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:14.574530  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:17.646514  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:23.726481  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:26.798485  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:32.878439  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:35.950501  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:42.030519  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:45.102528  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:51.182489  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:50:54.254539  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:00.334461  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:03.406475  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:09.486483  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:12.558522  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:18.638454  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:24.715494  662109 start.go:364] duration metric: took 4m3.035196519s to acquireMachinesLock for "no-preload-820741"
	I1209 11:51:24.715567  662109 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:51:24.715578  662109 fix.go:54] fixHost starting: 
	I1209 11:51:24.715984  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:51:24.716040  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:51:24.731722  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I1209 11:51:24.732247  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:51:24.732853  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:51:24.732876  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:51:24.733244  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:51:24.733437  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:24.733606  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:51:24.735295  662109 fix.go:112] recreateIfNeeded on no-preload-820741: state=Stopped err=<nil>
	I1209 11:51:24.735325  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	W1209 11:51:24.735521  662109 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:51:24.737237  662109 out.go:177] * Restarting existing kvm2 VM for "no-preload-820741" ...
	I1209 11:51:21.710446  661546 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.218:22: connect: no route to host
	I1209 11:51:24.712631  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:51:24.712695  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:51:24.713111  661546 buildroot.go:166] provisioning hostname "embed-certs-005123"
	I1209 11:51:24.713140  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:51:24.713398  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:51:24.715321  661546 machine.go:96] duration metric: took 4m34.547615205s to provisionDockerMachine
	I1209 11:51:24.715372  661546 fix.go:56] duration metric: took 4m34.572283015s for fixHost
	I1209 11:51:24.715381  661546 start.go:83] releasing machines lock for "embed-certs-005123", held for 4m34.572321017s
	W1209 11:51:24.715401  661546 start.go:714] error starting host: provision: host is not running
	W1209 11:51:24.715538  661546 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1209 11:51:24.715550  661546 start.go:729] Will try again in 5 seconds ...
	I1209 11:51:24.738507  662109 main.go:141] libmachine: (no-preload-820741) Calling .Start
	I1209 11:51:24.738692  662109 main.go:141] libmachine: (no-preload-820741) Ensuring networks are active...
	I1209 11:51:24.739450  662109 main.go:141] libmachine: (no-preload-820741) Ensuring network default is active
	I1209 11:51:24.739799  662109 main.go:141] libmachine: (no-preload-820741) Ensuring network mk-no-preload-820741 is active
	I1209 11:51:24.740206  662109 main.go:141] libmachine: (no-preload-820741) Getting domain xml...
	I1209 11:51:24.740963  662109 main.go:141] libmachine: (no-preload-820741) Creating domain...
	I1209 11:51:25.958244  662109 main.go:141] libmachine: (no-preload-820741) Waiting to get IP...
	I1209 11:51:25.959122  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:25.959507  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:25.959585  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:25.959486  663348 retry.go:31] will retry after 256.759149ms: waiting for machine to come up
	I1209 11:51:26.218626  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.219187  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.219222  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.219121  663348 retry.go:31] will retry after 259.957451ms: waiting for machine to come up
	I1209 11:51:26.480403  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.480800  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.480828  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.480753  663348 retry.go:31] will retry after 482.242492ms: waiting for machine to come up
	I1209 11:51:29.718422  661546 start.go:360] acquireMachinesLock for embed-certs-005123: {Name:mka6afffe9ddf2faebdc603ed0401805d58bf31e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 11:51:26.964420  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:26.964870  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:26.964903  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:26.964821  663348 retry.go:31] will retry after 386.489156ms: waiting for machine to come up
	I1209 11:51:27.353471  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:27.353850  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:27.353875  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:27.353796  663348 retry.go:31] will retry after 602.322538ms: waiting for machine to come up
	I1209 11:51:27.957621  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:27.958020  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:27.958051  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:27.957967  663348 retry.go:31] will retry after 747.355263ms: waiting for machine to come up
	I1209 11:51:28.707049  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:28.707486  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:28.707515  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:28.707436  663348 retry.go:31] will retry after 1.034218647s: waiting for machine to come up
	I1209 11:51:29.743755  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:29.744171  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:29.744213  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:29.744119  663348 retry.go:31] will retry after 1.348194555s: waiting for machine to come up
	I1209 11:51:31.094696  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:31.095202  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:31.095234  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:31.095124  663348 retry.go:31] will retry after 1.226653754s: waiting for machine to come up
	I1209 11:51:32.323529  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:32.323935  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:32.323959  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:32.323884  663348 retry.go:31] will retry after 2.008914491s: waiting for machine to come up
	I1209 11:51:34.335246  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:34.335619  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:34.335658  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:34.335593  663348 retry.go:31] will retry after 1.835576732s: waiting for machine to come up
	I1209 11:51:36.173316  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:36.173752  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:36.173786  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:36.173711  663348 retry.go:31] will retry after 3.204076548s: waiting for machine to come up
	I1209 11:51:39.382184  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:39.382619  662109 main.go:141] libmachine: (no-preload-820741) DBG | unable to find current IP address of domain no-preload-820741 in network mk-no-preload-820741
	I1209 11:51:39.382656  662109 main.go:141] libmachine: (no-preload-820741) DBG | I1209 11:51:39.382560  663348 retry.go:31] will retry after 3.298451611s: waiting for machine to come up
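
[editor's note] The interleaved retry.go lines above show libmachine polling the KVM domain for a DHCP lease, sleeping a jittered and gradually growing interval between attempts until the guest reports an IP. The Go sketch below illustrates that wait loop only; the lookupIP helper, the interval growth, and the timeout are assumptions for illustration, not minikube's actual retry.go implementation.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for the libvirt DHCP-lease lookup seen in the log; it
    // is a hypothetical helper and here always reports that no lease exists yet.
    func lookupIP(domain string) (string, error) {
    	return "", errors.New("unable to find current IP address of domain " + domain)
    }

    // waitForIP polls until the domain reports an IP address, sleeping a
    // jittered, gradually growing interval between attempts, similar in spirit
    // to the "will retry after ..." lines in the log above.
    func waitForIP(domain string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(domain); err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		if delay < 2*time.Second {
    			delay += delay / 2 // grow roughly 1.5x per attempt
    		}
    	}
    	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
    }

    func main() {
    	if _, err := waitForIP("no-preload-820741", 3*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
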
	I1209 11:51:44.103077  662586 start.go:364] duration metric: took 3m16.308265809s to acquireMachinesLock for "old-k8s-version-014592"
	I1209 11:51:44.103164  662586 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:51:44.103178  662586 fix.go:54] fixHost starting: 
	I1209 11:51:44.103657  662586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:51:44.103716  662586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:51:44.121162  662586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I1209 11:51:44.121672  662586 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:51:44.122203  662586 main.go:141] libmachine: Using API Version  1
	I1209 11:51:44.122232  662586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:51:44.122644  662586 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:51:44.122852  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:51:44.123023  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetState
	I1209 11:51:44.124544  662586 fix.go:112] recreateIfNeeded on old-k8s-version-014592: state=Stopped err=<nil>
	I1209 11:51:44.124567  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	W1209 11:51:44.124704  662586 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:51:44.126942  662586 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-014592" ...
	I1209 11:51:42.684438  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.684824  662109 main.go:141] libmachine: (no-preload-820741) Found IP for machine: 192.168.39.169
	I1209 11:51:42.684859  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has current primary IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.684867  662109 main.go:141] libmachine: (no-preload-820741) Reserving static IP address...
	I1209 11:51:42.685269  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "no-preload-820741", mac: "52:54:00:27:4c:0e", ip: "192.168.39.169"} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.685296  662109 main.go:141] libmachine: (no-preload-820741) DBG | skip adding static IP to network mk-no-preload-820741 - found existing host DHCP lease matching {name: "no-preload-820741", mac: "52:54:00:27:4c:0e", ip: "192.168.39.169"}
	I1209 11:51:42.685311  662109 main.go:141] libmachine: (no-preload-820741) Reserved static IP address: 192.168.39.169
	I1209 11:51:42.685334  662109 main.go:141] libmachine: (no-preload-820741) Waiting for SSH to be available...
	I1209 11:51:42.685348  662109 main.go:141] libmachine: (no-preload-820741) DBG | Getting to WaitForSSH function...
	I1209 11:51:42.687295  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.687588  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.687625  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.687702  662109 main.go:141] libmachine: (no-preload-820741) DBG | Using SSH client type: external
	I1209 11:51:42.687790  662109 main.go:141] libmachine: (no-preload-820741) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa (-rw-------)
	I1209 11:51:42.687824  662109 main.go:141] libmachine: (no-preload-820741) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:51:42.687844  662109 main.go:141] libmachine: (no-preload-820741) DBG | About to run SSH command:
	I1209 11:51:42.687857  662109 main.go:141] libmachine: (no-preload-820741) DBG | exit 0
	I1209 11:51:42.822609  662109 main.go:141] libmachine: (no-preload-820741) DBG | SSH cmd err, output: <nil>: 
	I1209 11:51:42.822996  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetConfigRaw
	I1209 11:51:42.823665  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:42.826484  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.826783  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.826808  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.827050  662109 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/config.json ...
	I1209 11:51:42.827323  662109 machine.go:93] provisionDockerMachine start ...
	I1209 11:51:42.827346  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:42.827620  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:42.830224  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.830569  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.830599  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.830717  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:42.830909  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.831107  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.831274  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:42.831454  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:42.831790  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:42.831807  662109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:51:42.938456  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:51:42.938500  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:42.938778  662109 buildroot.go:166] provisioning hostname "no-preload-820741"
	I1209 11:51:42.938813  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:42.939023  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:42.941706  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.942236  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:42.942267  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:42.942390  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:42.942606  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.942773  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:42.942922  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:42.943177  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:42.943382  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:42.943406  662109 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-820741 && echo "no-preload-820741" | sudo tee /etc/hostname
	I1209 11:51:43.065816  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-820741
	
	I1209 11:51:43.065849  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.068607  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.068916  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.068951  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.069127  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.069256  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.069351  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.069514  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.069637  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.069841  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.069861  662109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-820741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-820741/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-820741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:51:43.182210  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:51:43.182257  662109 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:51:43.182289  662109 buildroot.go:174] setting up certificates
	I1209 11:51:43.182305  662109 provision.go:84] configureAuth start
	I1209 11:51:43.182323  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetMachineName
	I1209 11:51:43.182674  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:43.185513  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.185872  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.185897  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.186018  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.188128  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.188482  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.188534  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.188668  662109 provision.go:143] copyHostCerts
	I1209 11:51:43.188752  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:51:43.188774  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:51:43.188840  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:51:43.188928  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:51:43.188936  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:51:43.188963  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:51:43.189019  662109 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:51:43.189027  662109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:51:43.189049  662109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:51:43.189104  662109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.no-preload-820741 san=[127.0.0.1 192.168.39.169 localhost minikube no-preload-820741]
	I1209 11:51:43.488258  662109 provision.go:177] copyRemoteCerts
	I1209 11:51:43.488336  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:51:43.488367  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.491689  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.492025  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.492059  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.492267  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.492465  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.492635  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.492768  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:43.577708  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:51:43.602000  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 11:51:43.627251  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:51:43.651591  662109 provision.go:87] duration metric: took 469.266358ms to configureAuth
	I1209 11:51:43.651626  662109 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:51:43.651863  662109 config.go:182] Loaded profile config "no-preload-820741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:51:43.652059  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.655150  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.655489  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.655518  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.655738  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.655963  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.656146  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.656295  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.656483  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.656688  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.656710  662109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:51:43.870704  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:51:43.870738  662109 machine.go:96] duration metric: took 1.043398486s to provisionDockerMachine
	I1209 11:51:43.870756  662109 start.go:293] postStartSetup for "no-preload-820741" (driver="kvm2")
	I1209 11:51:43.870771  662109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:51:43.870796  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:43.871158  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:51:43.871186  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.873863  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.874207  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.874230  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.874408  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.874610  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.874800  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.874925  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:43.956874  662109 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:51:43.960825  662109 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:51:43.960853  662109 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:51:43.960919  662109 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:51:43.960993  662109 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:51:43.961095  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:51:43.970138  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:51:43.991975  662109 start.go:296] duration metric: took 121.20118ms for postStartSetup
	I1209 11:51:43.992020  662109 fix.go:56] duration metric: took 19.276442325s for fixHost
	I1209 11:51:43.992043  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:43.994707  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.995035  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:43.995069  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:43.995186  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:43.995403  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.995568  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:43.995716  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:43.995927  662109 main.go:141] libmachine: Using SSH client type: native
	I1209 11:51:43.996107  662109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1209 11:51:43.996117  662109 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:51:44.102890  662109 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745104.077047488
	
	I1209 11:51:44.102914  662109 fix.go:216] guest clock: 1733745104.077047488
	I1209 11:51:44.102922  662109 fix.go:229] Guest: 2024-12-09 11:51:44.077047488 +0000 UTC Remote: 2024-12-09 11:51:43.992024296 +0000 UTC m=+262.463051778 (delta=85.023192ms)
	I1209 11:51:44.102952  662109 fix.go:200] guest clock delta is within tolerance: 85.023192ms
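
[editor's note] The fix.go lines above run `date +%s.%N` in the guest, compare the result against the host's reading of the remote time, and accept the ~85ms delta as "within tolerance". A minimal Go sketch of that comparison follows, using the values printed in the log; the 2s tolerance is a hypothetical threshold, since the report does not state the real one.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestClockDelta parses the output of `date +%s.%N` captured over SSH and
    // returns how far the guest clock is ahead of (or behind) the host's reading.
    func guestClockDelta(dateOutput string, hostTime time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
    	if err != nil {
    		return 0, fmt.Errorf("parsing guest clock %q: %w", dateOutput, err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second))).UTC()
    	return guest.Sub(hostTime), nil
    }

    func main() {
    	// Values taken from the fix.go lines above; float64 parsing loses a
    	// little sub-microsecond precision, so the printed delta is approximate.
    	host := time.Date(2024, 12, 9, 11, 51, 43, 992024296, time.UTC)
    	delta, err := guestClockDelta("1733745104.077047488", host)
    	if err != nil {
    		panic(err)
    	}
    	const tolerance = 2 * time.Second // hypothetical threshold, not minikube's actual value
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
    }
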
	I1209 11:51:44.102957  662109 start.go:83] releasing machines lock for "no-preload-820741", held for 19.387413234s
	I1209 11:51:44.102980  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.103272  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:44.105929  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.106314  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.106341  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.106567  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107102  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107323  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:51:44.107453  662109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:51:44.107507  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:44.107640  662109 ssh_runner.go:195] Run: cat /version.json
	I1209 11:51:44.107672  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:51:44.110422  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110792  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.110822  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110840  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.110984  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:44.111194  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:44.111376  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:44.111395  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:44.111408  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:44.111569  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:51:44.111589  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:44.111722  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:51:44.111827  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:51:44.111986  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:51:44.228799  662109 ssh_runner.go:195] Run: systemctl --version
	I1209 11:51:44.234678  662109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:51:44.383290  662109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:51:44.388906  662109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:51:44.388981  662109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:51:44.405271  662109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:51:44.405308  662109 start.go:495] detecting cgroup driver to use...
	I1209 11:51:44.405389  662109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:51:44.425480  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:51:44.439827  662109 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:51:44.439928  662109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:51:44.454750  662109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:51:44.470828  662109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:51:44.595400  662109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:51:44.756743  662109 docker.go:233] disabling docker service ...
	I1209 11:51:44.756817  662109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:51:44.774069  662109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:51:44.788188  662109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:51:44.909156  662109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:51:45.036992  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:51:45.051284  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:51:45.071001  662109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:51:45.071074  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.081491  662109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:51:45.081549  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.091476  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.103237  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.114723  662109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:51:45.126330  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.136501  662109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.152804  662109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:51:45.163221  662109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:51:45.173297  662109 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:51:45.173379  662109 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:51:45.186209  662109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:51:45.195773  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:51:45.339593  662109 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:51:45.438766  662109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:51:45.438851  662109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:51:45.444775  662109 start.go:563] Will wait 60s for crictl version
	I1209 11:51:45.444847  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:45.449585  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:51:45.493796  662109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:51:45.493899  662109 ssh_runner.go:195] Run: crio --version
	I1209 11:51:45.521391  662109 ssh_runner.go:195] Run: crio --version
	I1209 11:51:45.551249  662109 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:51:45.552714  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetIP
	I1209 11:51:45.555910  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:45.556271  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:51:45.556298  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:51:45.556571  662109 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 11:51:45.560718  662109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:51:45.573027  662109 kubeadm.go:883] updating cluster {Name:no-preload-820741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:51:45.573171  662109 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:51:45.573226  662109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:51:45.613696  662109 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:51:45.613724  662109 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 11:51:45.613810  662109 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.613847  662109 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.613864  662109 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.613880  662109 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:45.613857  662109 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1209 11:51:45.613939  662109 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.613801  662109 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.613810  662109 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:45.615885  662109 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.615983  662109 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.615889  662109 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:45.615885  662109 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:45.615893  662109 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.615891  662109 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1209 11:51:45.615897  662109 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.615893  662109 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.819757  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.836546  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1209 11:51:45.851918  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:45.857461  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:45.857468  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:45.863981  662109 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1209 11:51:45.864038  662109 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:45.864122  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:45.865289  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:45.868361  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.030476  662109 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1209 11:51:46.030525  662109 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.030582  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030525  662109 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1209 11:51:46.030603  662109 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1209 11:51:46.030625  662109 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.030652  662109 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.030694  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030655  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030720  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.030760  662109 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1209 11:51:46.030794  662109 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.030823  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.030823  662109 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1209 11:51:46.030845  662109 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.030868  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:46.041983  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.042072  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.042088  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.086909  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.086966  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.086997  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.141636  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.141723  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.141779  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.249908  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.249972  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.250024  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1209 11:51:46.250056  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 11:51:46.266345  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1209 11:51:46.266425  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1209 11:51:46.376691  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1209 11:51:46.376784  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1209 11:51:46.376904  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1209 11:51:46.376937  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.376911  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:46.376980  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1209 11:51:46.407997  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1209 11:51:46.408015  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1209 11:51:46.408120  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:46.408120  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:46.450341  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1209 11:51:46.450374  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.450445  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1209 11:51:46.450503  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1209 11:51:46.450537  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1209 11:51:46.450541  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1209 11:51:46.450570  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1209 11:51:46.450621  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:46.450621  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:46.450621  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1209 11:51:44.128421  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .Start
	I1209 11:51:44.128663  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring networks are active...
	I1209 11:51:44.129435  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring network default is active
	I1209 11:51:44.129805  662586 main.go:141] libmachine: (old-k8s-version-014592) Ensuring network mk-old-k8s-version-014592 is active
	I1209 11:51:44.130314  662586 main.go:141] libmachine: (old-k8s-version-014592) Getting domain xml...
	I1209 11:51:44.131070  662586 main.go:141] libmachine: (old-k8s-version-014592) Creating domain...
	I1209 11:51:45.405214  662586 main.go:141] libmachine: (old-k8s-version-014592) Waiting to get IP...
	I1209 11:51:45.406116  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:45.406680  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:45.406716  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:45.406613  663492 retry.go:31] will retry after 249.130873ms: waiting for machine to come up
	I1209 11:51:45.657224  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:45.657727  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:45.657756  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:45.657687  663492 retry.go:31] will retry after 363.458278ms: waiting for machine to come up
	I1209 11:51:46.023431  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.023912  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.023945  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.023851  663492 retry.go:31] will retry after 313.220722ms: waiting for machine to come up
	I1209 11:51:46.339300  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.339850  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.339876  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.339791  663492 retry.go:31] will retry after 517.613322ms: waiting for machine to come up
	I1209 11:51:46.859825  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:46.860229  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:46.860260  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:46.860198  663492 retry.go:31] will retry after 710.195232ms: waiting for machine to come up
	I1209 11:51:47.572460  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:47.573030  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:47.573080  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:47.573008  663492 retry.go:31] will retry after 620.717522ms: waiting for machine to come up
	I1209 11:51:46.869631  662109 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.822213  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.371704342s)
	I1209 11:51:48.822263  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1209 11:51:48.822262  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.371603127s)
	I1209 11:51:48.822296  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1209 11:51:48.822295  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.371584353s)
	I1209 11:51:48.822298  662109 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:48.822309  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1209 11:51:48.822324  662109 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.952666874s)
	I1209 11:51:48.822364  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1209 11:51:48.822367  662109 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1209 11:51:48.822416  662109 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.822460  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:51:50.794288  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.971891497s)
	I1209 11:51:50.794330  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1209 11:51:50.794357  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:50.794357  662109 ssh_runner.go:235] Completed: which crictl: (1.971876587s)
	I1209 11:51:50.794417  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1209 11:51:50.794437  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:48.195603  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:48.196140  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:48.196172  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:48.196083  663492 retry.go:31] will retry after 747.45082ms: waiting for machine to come up
	I1209 11:51:48.945230  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:48.945682  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:48.945737  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:48.945661  663492 retry.go:31] will retry after 1.307189412s: waiting for machine to come up
	I1209 11:51:50.254747  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:50.255335  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:50.255359  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:50.255276  663492 retry.go:31] will retry after 1.269881759s: waiting for machine to come up
	I1209 11:51:51.526966  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:51.527400  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:51.527431  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:51.527348  663492 retry.go:31] will retry after 1.424091669s: waiting for machine to come up
	I1209 11:51:52.958981  662109 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.164517823s)
	I1209 11:51:52.959044  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.164597978s)
	I1209 11:51:52.959089  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1209 11:51:52.959120  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:52.959057  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:52.959203  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1209 11:51:53.007629  662109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:51:54.832641  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.873398185s)
	I1209 11:51:54.832686  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1209 11:51:54.832694  662109 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.825022672s)
	I1209 11:51:54.832714  662109 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:54.832748  662109 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1209 11:51:54.832769  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1209 11:51:54.832853  662109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:51:52.953290  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:52.953711  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:52.953743  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:52.953658  663492 retry.go:31] will retry after 2.009829783s: waiting for machine to come up
	I1209 11:51:54.965818  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:54.966337  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:54.966372  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:54.966285  663492 retry.go:31] will retry after 2.209879817s: waiting for machine to come up
	I1209 11:51:57.177397  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:51:57.177870  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:51:57.177901  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:51:57.177805  663492 retry.go:31] will retry after 2.999056002s: waiting for machine to come up
	I1209 11:51:58.433813  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.600992195s)
	I1209 11:51:58.433889  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1209 11:51:58.433913  662109 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:58.433831  662109 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.600948593s)
	I1209 11:51:58.433947  662109 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1209 11:51:58.433961  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1209 11:51:59.792012  662109 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.35801884s)
	I1209 11:51:59.792049  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1209 11:51:59.792078  662109 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:51:59.792127  662109 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1209 11:52:00.635140  662109 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1209 11:52:00.635193  662109 cache_images.go:123] Successfully loaded all cached images
	I1209 11:52:00.635212  662109 cache_images.go:92] duration metric: took 15.021464053s to LoadCachedImages
	I1209 11:52:00.635232  662109 kubeadm.go:934] updating node { 192.168.39.169 8443 v1.31.2 crio true true} ...
	I1209 11:52:00.635395  662109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-820741 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:00.635481  662109 ssh_runner.go:195] Run: crio config
	I1209 11:52:00.680321  662109 cni.go:84] Creating CNI manager for ""
	I1209 11:52:00.680345  662109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:00.680370  662109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:00.680394  662109 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.169 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-820741 NodeName:no-preload-820741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:00.680545  662109 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-820741"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.169"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.169"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:00.680614  662109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:00.690391  662109 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:00.690484  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:00.699034  662109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1209 11:52:00.714710  662109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:00.730375  662109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1209 11:52:00.747519  662109 ssh_runner.go:195] Run: grep 192.168.39.169	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:00.751163  662109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:00.762405  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:00.881308  662109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:00.898028  662109 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741 for IP: 192.168.39.169
	I1209 11:52:00.898060  662109 certs.go:194] generating shared ca certs ...
	I1209 11:52:00.898085  662109 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:00.898349  662109 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:00.898415  662109 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:00.898429  662109 certs.go:256] generating profile certs ...
	I1209 11:52:00.898565  662109 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.key
	I1209 11:52:00.898646  662109 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.key.814e22a1
	I1209 11:52:00.898701  662109 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.key
	I1209 11:52:00.898859  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:00.898904  662109 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:00.898918  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:00.898949  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:00.898982  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:00.899007  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:00.899045  662109 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:00.899994  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:00.943848  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:00.970587  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:01.025164  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:01.055766  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 11:52:01.089756  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:52:01.112171  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:01.135928  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 11:52:01.157703  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:01.179806  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:01.201663  662109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:01.223314  662109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:01.239214  662109 ssh_runner.go:195] Run: openssl version
	I1209 11:52:01.244687  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:01.254630  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.258801  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.258849  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:01.264219  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:01.274077  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:01.284511  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.289141  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.289216  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:01.295079  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:01.305606  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:01.315795  662109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.320085  662109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.320147  662109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:01.325590  662109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:01.335747  662109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:01.340113  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:01.346217  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:01.351799  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:01.357441  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:01.362784  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:01.368210  662109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 11:52:01.373975  662109 kubeadm.go:392] StartCluster: {Name:no-preload-820741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-820741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:01.374101  662109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:01.374160  662109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:01.409780  662109 cri.go:89] found id: ""
	I1209 11:52:01.409852  662109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:01.419505  662109 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:01.419550  662109 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:01.419603  662109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:01.429000  662109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:01.429999  662109 kubeconfig.go:125] found "no-preload-820741" server: "https://192.168.39.169:8443"
	I1209 11:52:01.432151  662109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:01.440964  662109 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.169
	I1209 11:52:01.441003  662109 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:01.441021  662109 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:01.441084  662109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:01.474788  662109 cri.go:89] found id: ""
	I1209 11:52:01.474865  662109 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:01.491360  662109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:01.500483  662109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:01.500505  662109 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:01.500558  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:01.509190  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:01.509251  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:01.518248  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:01.526845  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:01.526909  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:01.535849  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:01.544609  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:01.544672  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:01.553527  662109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:01.561876  662109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:01.561928  662109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:00.178781  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:00.179225  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | unable to find current IP address of domain old-k8s-version-014592 in network mk-old-k8s-version-014592
	I1209 11:52:00.179273  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | I1209 11:52:00.179165  663492 retry.go:31] will retry after 4.532370187s: waiting for machine to come up
	I1209 11:52:05.915073  663024 start.go:364] duration metric: took 2m6.318720193s to acquireMachinesLock for "default-k8s-diff-port-482476"
	I1209 11:52:05.915166  663024 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:52:05.915179  663024 fix.go:54] fixHost starting: 
	I1209 11:52:05.915652  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:05.915716  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:05.933810  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
	I1209 11:52:05.934363  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:05.935019  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:52:05.935071  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:05.935489  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:05.935682  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:05.935879  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:52:05.937627  663024 fix.go:112] recreateIfNeeded on default-k8s-diff-port-482476: state=Stopped err=<nil>
	I1209 11:52:05.937660  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	W1209 11:52:05.937842  663024 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:52:05.939893  663024 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-482476" ...
	I1209 11:52:01.570657  662109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:01.579782  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:01.680268  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.573653  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.762024  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.826444  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:02.932170  662109 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:02.932291  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.432933  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.933186  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:03.948529  662109 api_server.go:72] duration metric: took 1.016357501s to wait for apiserver process to appear ...
	I1209 11:52:03.948565  662109 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:03.948595  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.443635  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.443675  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:06.443692  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.490801  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.490839  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:06.490860  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.502460  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:06.502497  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:04.713201  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.713711  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has current primary IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.713817  662586 main.go:141] libmachine: (old-k8s-version-014592) Found IP for machine: 192.168.61.132
	I1209 11:52:04.713853  662586 main.go:141] libmachine: (old-k8s-version-014592) Reserving static IP address...
	I1209 11:52:04.714267  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "old-k8s-version-014592", mac: "52:54:00:54:72:3e", ip: "192.168.61.132"} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.714298  662586 main.go:141] libmachine: (old-k8s-version-014592) Reserved static IP address: 192.168.61.132
	I1209 11:52:04.714318  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | skip adding static IP to network mk-old-k8s-version-014592 - found existing host DHCP lease matching {name: "old-k8s-version-014592", mac: "52:54:00:54:72:3e", ip: "192.168.61.132"}
	I1209 11:52:04.714332  662586 main.go:141] libmachine: (old-k8s-version-014592) Waiting for SSH to be available...
	I1209 11:52:04.714347  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Getting to WaitForSSH function...
	I1209 11:52:04.716632  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.716972  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.717005  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.717129  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Using SSH client type: external
	I1209 11:52:04.717157  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa (-rw-------)
	I1209 11:52:04.717192  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:04.717206  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | About to run SSH command:
	I1209 11:52:04.717223  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | exit 0
	I1209 11:52:04.846290  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:04.846675  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetConfigRaw
	I1209 11:52:04.847483  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:04.850430  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.850859  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.850888  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.851113  662586 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/config.json ...
	I1209 11:52:04.851328  662586 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:04.851348  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:04.851547  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:04.854318  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.854622  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.854654  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.854782  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:04.854959  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.855134  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.855276  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:04.855438  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:04.855696  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:04.855709  662586 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:04.963021  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:04.963059  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:04.963344  662586 buildroot.go:166] provisioning hostname "old-k8s-version-014592"
	I1209 11:52:04.963368  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:04.963545  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:04.966102  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.966461  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:04.966496  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:04.966607  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:04.966780  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.966919  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:04.967056  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:04.967221  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:04.967407  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:04.967419  662586 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-014592 && echo "old-k8s-version-014592" | sudo tee /etc/hostname
	I1209 11:52:05.094147  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-014592
	
	I1209 11:52:05.094210  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.097298  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.097729  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.097765  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.097949  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.098197  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.098460  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.098632  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.098829  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.099046  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.099082  662586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-014592' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-014592/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-014592' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:05.210739  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:05.210785  662586 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:05.210846  662586 buildroot.go:174] setting up certificates
	I1209 11:52:05.210859  662586 provision.go:84] configureAuth start
	I1209 11:52:05.210881  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetMachineName
	I1209 11:52:05.211210  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:05.214546  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.214937  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.214967  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.215167  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.217866  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.218269  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.218300  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.218452  662586 provision.go:143] copyHostCerts
	I1209 11:52:05.218530  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:05.218558  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:05.218630  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:05.218807  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:05.218820  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:05.218863  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:05.218943  662586 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:05.218953  662586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:05.218983  662586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:05.219060  662586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-014592 san=[127.0.0.1 192.168.61.132 localhost minikube old-k8s-version-014592]
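The SANs requested above should be baked into the generated server certificate; an illustrative check (not part of the test run):
	openssl x509 -in /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	# expected to list 127.0.0.1, 192.168.61.132, localhost, minikube, old-k8s-version-014592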
	I1209 11:52:05.292744  662586 provision.go:177] copyRemoteCerts
	I1209 11:52:05.292830  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:05.292867  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.296244  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.296670  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.296712  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.296896  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.297111  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.297330  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.297514  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.381148  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:05.404883  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 11:52:05.433421  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:05.456775  662586 provision.go:87] duration metric: took 245.894878ms to configureAuth
	I1209 11:52:05.456811  662586 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:05.457003  662586 config.go:182] Loaded profile config "old-k8s-version-014592": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1209 11:52:05.457082  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.459984  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.460372  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.460415  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.460631  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.460851  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.461021  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.461217  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.461481  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.461702  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.461722  662586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:05.683276  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:05.683311  662586 machine.go:96] duration metric: took 831.968459ms to provisionDockerMachine
	I1209 11:52:05.683335  662586 start.go:293] postStartSetup for "old-k8s-version-014592" (driver="kvm2")
	I1209 11:52:05.683349  662586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:05.683391  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.683809  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:05.683850  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.687116  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.687540  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.687579  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.687787  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.688013  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.688204  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.688439  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.768777  662586 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:05.772572  662586 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:05.772603  662586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:05.772690  662586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:05.772813  662586 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:05.772942  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:05.784153  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:05.808677  662586 start.go:296] duration metric: took 125.320445ms for postStartSetup
	I1209 11:52:05.808736  662586 fix.go:56] duration metric: took 21.705557963s for fixHost
	I1209 11:52:05.808766  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.811685  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.812053  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.812090  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.812426  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.812639  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.812853  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.812996  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.813345  662586 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:05.813562  662586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.132 22 <nil> <nil>}
	I1209 11:52:05.813572  662586 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:05.914863  662586 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745125.875320243
	
	I1209 11:52:05.914892  662586 fix.go:216] guest clock: 1733745125.875320243
	I1209 11:52:05.914906  662586 fix.go:229] Guest: 2024-12-09 11:52:05.875320243 +0000 UTC Remote: 2024-12-09 11:52:05.808742373 +0000 UTC m=+218.159686894 (delta=66.57787ms)
	I1209 11:52:05.914941  662586 fix.go:200] guest clock delta is within tolerance: 66.57787ms
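The reported delta is simply the difference between the guest and remote timestamps above; a quick illustrative check:
	echo '1733745125.875320243 - 1733745125.808742373' | bc
	# .066577870  -> 66.57787ms, inside the clock-skew tolerance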
	I1209 11:52:05.914952  662586 start.go:83] releasing machines lock for "old-k8s-version-014592", held for 21.811813657s
	I1209 11:52:05.914983  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.915289  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:05.918015  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.918513  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.918546  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.918662  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919315  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919508  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .DriverName
	I1209 11:52:05.919628  662586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:05.919684  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.919739  662586 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:05.919767  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHHostname
	I1209 11:52:05.922529  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.922816  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923096  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.923121  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923223  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:05.923258  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:05.923291  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.923459  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHPort
	I1209 11:52:05.923602  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.923616  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHKeyPath
	I1209 11:52:05.923848  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.923900  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetSSHUsername
	I1209 11:52:05.924030  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:05.924104  662586 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/old-k8s-version-014592/id_rsa Username:docker}
	I1209 11:52:06.037215  662586 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:06.043193  662586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:06.193717  662586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:06.199693  662586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:06.199786  662586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:06.216007  662586 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:06.216040  662586 start.go:495] detecting cgroup driver to use...
	I1209 11:52:06.216131  662586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:06.233631  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:06.249730  662586 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:06.249817  662586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:06.265290  662586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:06.281676  662586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:06.432116  662586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:06.605899  662586 docker.go:233] disabling docker service ...
	I1209 11:52:06.606004  662586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:06.622861  662586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:06.637605  662586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:06.772842  662586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:06.905950  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:06.923048  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:06.943483  662586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1209 11:52:06.943542  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.957647  662586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:06.957725  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.970221  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.981243  662586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:06.992084  662586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:07.004284  662586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:07.014329  662586 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:07.014411  662586 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:07.028104  662586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:52:07.038782  662586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:07.155779  662586 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:07.271726  662586 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:07.271815  662586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:07.276994  662586 start.go:563] Will wait 60s for crictl version
	I1209 11:52:07.277061  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:07.281212  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:07.328839  662586 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:07.328959  662586 ssh_runner.go:195] Run: crio --version
	I1209 11:52:07.360632  662586 ssh_runner.go:195] Run: crio --version
	I1209 11:52:07.393046  662586 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1209 11:52:07.394357  662586 main.go:141] libmachine: (old-k8s-version-014592) Calling .GetIP
	I1209 11:52:07.398002  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:07.398539  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:72:3e", ip: ""} in network mk-old-k8s-version-014592: {Iface:virbr3 ExpiryTime:2024-12-09 12:51:55 +0000 UTC Type:0 Mac:52:54:00:54:72:3e Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:old-k8s-version-014592 Clientid:01:52:54:00:54:72:3e}
	I1209 11:52:07.398564  662586 main.go:141] libmachine: (old-k8s-version-014592) DBG | domain old-k8s-version-014592 has defined IP address 192.168.61.132 and MAC address 52:54:00:54:72:3e in network mk-old-k8s-version-014592
	I1209 11:52:07.398893  662586 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:07.404512  662586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:07.417822  662586 kubeadm.go:883] updating cluster {Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:07.418006  662586 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 11:52:07.418108  662586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:07.473163  662586 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:52:07.473249  662586 ssh_runner.go:195] Run: which lz4
	I1209 11:52:07.478501  662586 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:07.483744  662586 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:07.483786  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
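The ~473MB preload tarball copied above holds the cached v1.20.0 images and container storage; its contents can be listed on the guest (illustrative, assumes lz4 is available in the guest image):
	sudo tar -I lz4 -tf /preloaded.tar.lz4 | head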
	I1209 11:52:06.949438  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:06.959097  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:06.959150  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:07.449249  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:07.466817  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:07.466860  662109 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:07.948998  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:52:07.958340  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I1209 11:52:07.966049  662109 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:07.966095  662109 api_server.go:131] duration metric: took 4.017521352s to wait for apiserver health ...
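The per-check output above is the apiserver's verbose health report; it can also be fetched through kubectl (illustrative, assuming the kubeconfig context carries the profile name):
	kubectl --context no-preload-820741 get --raw '/healthz?verbose'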
	I1209 11:52:07.966111  662109 cni.go:84] Creating CNI manager for ""
	I1209 11:52:07.966121  662109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:07.967962  662109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:52:05.941206  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Start
	I1209 11:52:05.941411  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring networks are active...
	I1209 11:52:05.942245  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring network default is active
	I1209 11:52:05.942724  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Ensuring network mk-default-k8s-diff-port-482476 is active
	I1209 11:52:05.943274  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Getting domain xml...
	I1209 11:52:05.944080  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Creating domain...
	I1209 11:52:07.394633  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting to get IP...
	I1209 11:52:07.396032  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.397444  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.397560  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.397434  663663 retry.go:31] will retry after 205.256699ms: waiting for machine to come up
	I1209 11:52:07.604209  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.604884  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.604920  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.604828  663663 retry.go:31] will retry after 291.255961ms: waiting for machine to come up
	I1209 11:52:07.897467  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.898992  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:07.899020  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:07.898866  663663 retry.go:31] will retry after 437.180412ms: waiting for machine to come up
	I1209 11:52:08.337664  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.338195  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.338235  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:08.338151  663663 retry.go:31] will retry after 603.826089ms: waiting for machine to come up
	I1209 11:52:08.944048  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.944672  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:08.944702  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:08.944612  663663 retry.go:31] will retry after 557.882868ms: waiting for machine to come up
	I1209 11:52:07.969367  662109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:07.986045  662109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
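The 496-byte 1-k8s.conflist written above is minikube's bridge CNI configuration; a minimal conflist of that general shape is sketched below (illustrative only; field values such as the 10.244.0.0/16 pod subnet are assumptions, not the exact file contents):
	sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF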
	I1209 11:52:08.075377  662109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:08.091609  662109 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:08.091648  662109 system_pods.go:61] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:08.091656  662109 system_pods.go:61] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:08.091664  662109 system_pods.go:61] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:08.091670  662109 system_pods.go:61] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:08.091675  662109 system_pods.go:61] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:52:08.091681  662109 system_pods.go:61] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:08.091686  662109 system_pods.go:61] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:08.091691  662109 system_pods.go:61] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:08.091699  662109 system_pods.go:74] duration metric: took 16.289433ms to wait for pod list to return data ...
	I1209 11:52:08.091707  662109 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:08.096961  662109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:08.097010  662109 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:08.097047  662109 node_conditions.go:105] duration metric: took 5.334194ms to run NodePressure ...
	I1209 11:52:08.097073  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:08.573868  662109 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:08.583670  662109 kubeadm.go:739] kubelet initialised
	I1209 11:52:08.583700  662109 kubeadm.go:740] duration metric: took 9.800796ms waiting for restarted kubelet to initialise ...
	I1209 11:52:08.583713  662109 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:08.592490  662109 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.600581  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.600611  662109 pod_ready.go:82] duration metric: took 8.087599ms for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.600623  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.600633  662109 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.609663  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "etcd-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.609698  662109 pod_ready.go:82] duration metric: took 9.054194ms for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.609712  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "etcd-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.609722  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.615482  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-apiserver-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.615514  662109 pod_ready.go:82] duration metric: took 5.78152ms for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.615526  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-apiserver-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.615536  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.623662  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.623698  662109 pod_ready.go:82] duration metric: took 8.151877ms for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.623713  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.623722  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:08.978286  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-proxy-hpvvp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.978323  662109 pod_ready.go:82] duration metric: took 354.589596ms for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:08.978344  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-proxy-hpvvp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:08.978356  662109 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:09.378434  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "kube-scheduler-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.378471  662109 pod_ready.go:82] duration metric: took 400.107028ms for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:09.378484  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "kube-scheduler-no-preload-820741" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.378494  662109 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:09.778087  662109 pod_ready.go:98] node "no-preload-820741" hosting pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.778117  662109 pod_ready.go:82] duration metric: took 399.613592ms for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	E1209 11:52:09.778129  662109 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-820741" hosting pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:09.778138  662109 pod_ready.go:39] duration metric: took 1.194413796s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
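Every per-pod wait above short-circuits because the node itself still reports Ready=False; the same state can be inspected from the host (illustrative, assuming the kubeconfig context matches the profile name):
	kubectl --context no-preload-820741 get nodes
	kubectl --context no-preload-820741 get pods -n kube-system -o wide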
	I1209 11:52:09.778162  662109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:52:09.793629  662109 ops.go:34] apiserver oom_adj: -16
	I1209 11:52:09.793663  662109 kubeadm.go:597] duration metric: took 8.374104555s to restartPrimaryControlPlane
	I1209 11:52:09.793681  662109 kubeadm.go:394] duration metric: took 8.419719684s to StartCluster
	I1209 11:52:09.793708  662109 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:09.793848  662109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:52:09.796407  662109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:09.796774  662109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:52:09.796837  662109 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:52:09.796954  662109 addons.go:69] Setting storage-provisioner=true in profile "no-preload-820741"
	I1209 11:52:09.796975  662109 addons.go:234] Setting addon storage-provisioner=true in "no-preload-820741"
	W1209 11:52:09.796984  662109 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:52:09.797023  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.797048  662109 config.go:182] Loaded profile config "no-preload-820741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:09.797086  662109 addons.go:69] Setting default-storageclass=true in profile "no-preload-820741"
	I1209 11:52:09.797110  662109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-820741"
	I1209 11:52:09.797119  662109 addons.go:69] Setting metrics-server=true in profile "no-preload-820741"
	I1209 11:52:09.797150  662109 addons.go:234] Setting addon metrics-server=true in "no-preload-820741"
	W1209 11:52:09.797160  662109 addons.go:243] addon metrics-server should already be in state true
	I1209 11:52:09.797204  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.797545  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797571  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797579  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.797596  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.797611  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.797620  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.799690  662109 out.go:177] * Verifying Kubernetes components...
	I1209 11:52:09.801035  662109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:09.814968  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33991
	I1209 11:52:09.815010  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34319
	I1209 11:52:09.815576  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.815715  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.816340  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.816361  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.816666  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.816683  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.816745  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.817402  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.817449  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.818118  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.818680  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.818718  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.842345  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37501
	I1209 11:52:09.842582  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44183
	I1209 11:52:09.842703  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38793
	I1209 11:52:09.843479  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843608  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843667  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.843973  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.843999  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.844168  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.844180  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.844575  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.844773  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.845107  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.845122  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.845633  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.845887  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.847386  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.848553  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.849410  662109 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:52:09.849690  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.850230  662109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:09.850303  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:52:09.850323  662109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:52:09.850346  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.851051  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.851404  662109 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:52:09.851426  662109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:52:09.851447  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.855303  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.855935  662109 addons.go:234] Setting addon default-storageclass=true in "no-preload-820741"
	W1209 11:52:09.855958  662109 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:52:09.855991  662109 host.go:66] Checking if "no-preload-820741" exists ...
	I1209 11:52:09.856373  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.856429  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.857583  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.857614  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.857874  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.858206  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.858588  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.858766  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:09.859464  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.859875  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.859897  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.860238  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.860449  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.860597  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.860736  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:09.880235  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I1209 11:52:09.880846  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.881409  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.881429  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.881855  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.882651  662109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:09.882711  662109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:09.904576  662109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I1209 11:52:09.905132  662109 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:09.905765  662109 main.go:141] libmachine: Using API Version  1
	I1209 11:52:09.905788  662109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:09.906224  662109 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:09.906469  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetState
	I1209 11:52:09.908475  662109 main.go:141] libmachine: (no-preload-820741) Calling .DriverName
	I1209 11:52:09.908715  662109 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:52:09.908735  662109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:52:09.908756  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHHostname
	I1209 11:52:09.912294  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.912928  662109 main.go:141] libmachine: (no-preload-820741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:0e", ip: ""} in network mk-no-preload-820741: {Iface:virbr1 ExpiryTime:2024-12-09 12:43:41 +0000 UTC Type:0 Mac:52:54:00:27:4c:0e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:no-preload-820741 Clientid:01:52:54:00:27:4c:0e}
	I1209 11:52:09.912963  662109 main.go:141] libmachine: (no-preload-820741) DBG | domain no-preload-820741 has defined IP address 192.168.39.169 and MAC address 52:54:00:27:4c:0e in network mk-no-preload-820741
	I1209 11:52:09.913128  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHPort
	I1209 11:52:09.913383  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHKeyPath
	I1209 11:52:09.913563  662109 main.go:141] libmachine: (no-preload-820741) Calling .GetSSHUsername
	I1209 11:52:09.913711  662109 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/no-preload-820741/id_rsa Username:docker}
	I1209 11:52:10.141200  662109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:10.172182  662109 node_ready.go:35] waiting up to 6m0s for node "no-preload-820741" to be "Ready" ...
	I1209 11:52:10.306617  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:52:10.306646  662109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:52:10.321962  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:52:10.326125  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:52:10.360534  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:52:10.360568  662109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:52:10.470875  662109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:52:10.470917  662109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:52:10.555610  662109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:52:11.721480  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.395310752s)
	I1209 11:52:11.721571  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721638  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.721581  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.165925756s)
	I1209 11:52:11.721735  662109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.399738143s)
	I1209 11:52:11.721753  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721766  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.721765  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.721779  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722002  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722014  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722021  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722028  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722201  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722213  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722221  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722226  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722320  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.722329  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722349  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.722360  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722384  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.722395  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.722424  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722438  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722465  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722475  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722490  662109 addons.go:475] Verifying addon metrics-server=true in "no-preload-820741"
	I1209 11:52:11.722560  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.722579  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.722564  662109 main.go:141] libmachine: (no-preload-820741) DBG | Closing plugin on server side
	I1209 11:52:11.729638  662109 main.go:141] libmachine: Making call to close driver server
	I1209 11:52:11.729660  662109 main.go:141] libmachine: (no-preload-820741) Calling .Close
	I1209 11:52:11.729934  662109 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:52:11.729950  662109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:52:11.731642  662109 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1209 11:52:09.097654  662586 crio.go:462] duration metric: took 1.619191765s to copy over tarball
	I1209 11:52:09.097748  662586 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:12.304496  662586 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.20670295s)
	I1209 11:52:12.304543  662586 crio.go:469] duration metric: took 3.206852542s to extract the tarball
	I1209 11:52:12.304553  662586 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:12.347991  662586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:12.385411  662586 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1209 11:52:12.385438  662586 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 11:52:12.385533  662586 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:12.385557  662586 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.385570  662586 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.385609  662586 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.385641  662586 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1209 11:52:12.385650  662586 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.385645  662586 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.385620  662586 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.387326  662586 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.387335  662586 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:12.387371  662586 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1209 11:52:12.387372  662586 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.387328  662586 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.387338  662586 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.387328  662586 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.387383  662586 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.621631  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.623694  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.632536  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1209 11:52:12.634550  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.638401  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.641071  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.645344  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:09.504566  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:09.505124  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:09.505155  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:09.505076  663663 retry.go:31] will retry after 636.87343ms: waiting for machine to come up
	I1209 11:52:10.144387  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.145090  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.145119  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:10.145037  663663 retry.go:31] will retry after 716.448577ms: waiting for machine to come up
	I1209 11:52:10.863113  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.863816  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:10.863848  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:10.863762  663663 retry.go:31] will retry after 901.007245ms: waiting for machine to come up
	I1209 11:52:11.766356  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:11.766745  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:11.766773  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:11.766688  663663 retry.go:31] will retry after 1.570604193s: waiting for machine to come up
	I1209 11:52:13.339318  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:13.339796  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:13.339828  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:13.339744  663663 retry.go:31] will retry after 1.928200683s: waiting for machine to come up
	I1209 11:52:11.732956  662109 addons.go:510] duration metric: took 1.936137102s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1209 11:52:12.175844  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:14.504491  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:12.756066  662586 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1209 11:52:12.756121  662586 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.756134  662586 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1209 11:52:12.756175  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.756179  662586 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.756230  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.808091  662586 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1209 11:52:12.808139  662586 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1209 11:52:12.808186  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809593  662586 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1209 11:52:12.809622  662586 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1209 11:52:12.809637  662586 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.809659  662586 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.809682  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809712  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809775  662586 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1209 11:52:12.809803  662586 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.809829  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.809841  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809724  662586 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1209 11:52:12.809873  662586 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.809898  662586 ssh_runner.go:195] Run: which crictl
	I1209 11:52:12.809933  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.812256  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:12.819121  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.825106  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:12.910431  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:12.910501  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:12.910560  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:12.910503  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:12.910638  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:12.910713  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:12.930461  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:13.079147  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:13.079189  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1209 11:52:13.079233  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1209 11:52:13.079276  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1209 11:52:13.079418  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:13.079447  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1209 11:52:13.079517  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1209 11:52:13.224753  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1209 11:52:13.227126  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1209 11:52:13.227190  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1209 11:52:13.227253  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1209 11:52:13.227291  662586 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1209 11:52:13.227332  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1209 11:52:13.227393  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1209 11:52:13.277747  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1209 11:52:13.285286  662586 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1209 11:52:13.663858  662586 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:52:13.805603  662586 cache_images.go:92] duration metric: took 1.420145666s to LoadCachedImages
	W1209 11:52:13.805814  662586 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20068-609844/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1209 11:52:13.805848  662586 kubeadm.go:934] updating node { 192.168.61.132 8443 v1.20.0 crio true true} ...
	I1209 11:52:13.805980  662586 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-014592 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:13.806079  662586 ssh_runner.go:195] Run: crio config
	I1209 11:52:13.870766  662586 cni.go:84] Creating CNI manager for ""
	I1209 11:52:13.870797  662586 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:13.870813  662586 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:13.870841  662586 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.132 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-014592 NodeName:old-k8s-version-014592 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1209 11:52:13.871050  662586 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-014592"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:13.871136  662586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1209 11:52:13.881556  662586 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:13.881628  662586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:13.891122  662586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1209 11:52:13.908181  662586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:13.925041  662586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1209 11:52:13.941567  662586 ssh_runner.go:195] Run: grep 192.168.61.132	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:13.945502  662586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:13.957476  662586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:14.091699  662586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:14.108772  662586 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592 for IP: 192.168.61.132
	I1209 11:52:14.108810  662586 certs.go:194] generating shared ca certs ...
	I1209 11:52:14.108838  662586 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:14.109024  662586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:14.109087  662586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:14.109105  662586 certs.go:256] generating profile certs ...
	I1209 11:52:14.109248  662586 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.key
	I1209 11:52:14.109323  662586 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key.28078577
	I1209 11:52:14.109383  662586 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key
	I1209 11:52:14.109572  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:14.109609  662586 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:14.109619  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:14.109659  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:14.109697  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:14.109737  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:14.109802  662586 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:14.110497  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:14.145815  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:14.179452  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:14.217469  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:14.250288  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 11:52:14.287110  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 11:52:14.317190  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:14.356825  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:14.379756  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:14.402045  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:14.425287  662586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:14.448025  662586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:14.464144  662586 ssh_runner.go:195] Run: openssl version
	I1209 11:52:14.470256  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:14.481298  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.485849  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.485904  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:14.492321  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:14.504155  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:14.515819  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.520876  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.520955  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:14.527295  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:14.538319  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:14.549753  662586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.554273  662586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.554341  662586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:14.559893  662586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:14.570744  662586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:14.575763  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:14.582279  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:14.588549  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:14.594376  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:14.599758  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:14.605497  662586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 11:52:14.611083  662586 kubeadm.go:392] StartCluster: {Name:old-k8s-version-014592 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:14.611213  662586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:14.611288  662586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:14.649447  662586 cri.go:89] found id: ""
	I1209 11:52:14.649538  662586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:14.660070  662586 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:14.660094  662586 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:14.660145  662586 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:14.670412  662586 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:14.671387  662586 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-014592" does not appear in /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:52:14.672043  662586 kubeconfig.go:62] /home/jenkins/minikube-integration/20068-609844/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-014592" cluster setting kubeconfig missing "old-k8s-version-014592" context setting]
	I1209 11:52:14.673337  662586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:14.708285  662586 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:14.719486  662586 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.132
	I1209 11:52:14.719535  662586 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:14.719563  662586 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:14.719635  662586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:14.755280  662586 cri.go:89] found id: ""
	I1209 11:52:14.755369  662586 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:14.771385  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:14.781364  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:14.781387  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:14.781455  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:14.790942  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:14.791016  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:14.800481  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:14.809875  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:14.809948  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:14.819619  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:14.831670  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:14.831750  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:14.844244  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:14.853328  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:14.853403  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:14.862428  662586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:14.871346  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.007799  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.697594  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:15.921787  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:16.031826  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:16.132199  662586 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:16.132310  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:16.633329  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:17.133389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:17.632581  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:15.270255  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:15.270804  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:15.270836  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:15.270741  663663 retry.go:31] will retry after 2.90998032s: waiting for machine to come up
	I1209 11:52:18.182069  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:18.182741  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:18.182774  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:18.182689  663663 retry.go:31] will retry after 3.196470388s: waiting for machine to come up
	I1209 11:52:16.676188  662109 node_ready.go:53] node "no-preload-820741" has status "Ready":"False"
	I1209 11:52:17.175894  662109 node_ready.go:49] node "no-preload-820741" has status "Ready":"True"
	I1209 11:52:17.175928  662109 node_ready.go:38] duration metric: took 7.003696159s for node "no-preload-820741" to be "Ready" ...
	I1209 11:52:17.175945  662109 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:17.180647  662109 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:19.188583  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:18.133165  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:18.632403  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:19.132416  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:19.633332  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:20.133343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:20.632968  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:21.133411  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:21.632656  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:22.132876  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:22.632816  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:21.381260  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:21.381912  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | unable to find current IP address of domain default-k8s-diff-port-482476 in network mk-default-k8s-diff-port-482476
	I1209 11:52:21.381943  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | I1209 11:52:21.381834  663663 retry.go:31] will retry after 3.621023528s: waiting for machine to come up
	I1209 11:52:26.142813  661546 start.go:364] duration metric: took 56.424295065s to acquireMachinesLock for "embed-certs-005123"
	I1209 11:52:26.142877  661546 start.go:96] Skipping create...Using existing machine configuration
	I1209 11:52:26.142886  661546 fix.go:54] fixHost starting: 
	I1209 11:52:26.143376  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:52:26.143416  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:52:26.164438  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45591
	I1209 11:52:26.165041  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:52:26.165779  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:52:26.165828  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:52:26.166318  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:52:26.166544  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:26.166745  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:52:26.168534  661546 fix.go:112] recreateIfNeeded on embed-certs-005123: state=Stopped err=<nil>
	I1209 11:52:26.168564  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	W1209 11:52:26.168753  661546 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 11:52:26.170973  661546 out.go:177] * Restarting existing kvm2 VM for "embed-certs-005123" ...
	I1209 11:52:26.172269  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Start
	I1209 11:52:26.172500  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring networks are active...
	I1209 11:52:26.173391  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring network default is active
	I1209 11:52:26.173747  661546 main.go:141] libmachine: (embed-certs-005123) Ensuring network mk-embed-certs-005123 is active
	I1209 11:52:26.174208  661546 main.go:141] libmachine: (embed-certs-005123) Getting domain xml...
	I1209 11:52:26.174990  661546 main.go:141] libmachine: (embed-certs-005123) Creating domain...
	I1209 11:52:21.687274  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:23.688011  662109 pod_ready.go:103] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:24.187886  662109 pod_ready.go:93] pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.187917  662109 pod_ready.go:82] duration metric: took 7.007243363s for pod "coredns-7c65d6cfc9-z647g" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.187928  662109 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.193936  662109 pod_ready.go:93] pod "etcd-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.193958  662109 pod_ready.go:82] duration metric: took 6.02353ms for pod "etcd-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.193966  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.203685  662109 pod_ready.go:93] pod "kube-apiserver-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.203712  662109 pod_ready.go:82] duration metric: took 9.739287ms for pod "kube-apiserver-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.203722  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.210004  662109 pod_ready.go:93] pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.210034  662109 pod_ready.go:82] duration metric: took 6.304008ms for pod "kube-controller-manager-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.210048  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.216225  662109 pod_ready.go:93] pod "kube-proxy-hpvvp" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.216249  662109 pod_ready.go:82] duration metric: took 6.193945ms for pod "kube-proxy-hpvvp" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.216258  662109 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.584682  662109 pod_ready.go:93] pod "kube-scheduler-no-preload-820741" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:24.584711  662109 pod_ready.go:82] duration metric: took 368.445803ms for pod "kube-scheduler-no-preload-820741" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:24.584724  662109 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
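[editor's note] The pod_ready lines above poll each control-plane pod until its Ready condition reports True, recording the elapsed time. A minimal sketch of that polling pattern with client-go is shown below; the kubeconfig path is a placeholder and the pod name is copied from the log, and this is not the harness's pod_ready.go implementation:

// waitready.go - illustrative sketch of polling a pod's Ready condition.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	const ns, name = "kube-system", "etcd-no-preload-820741" // pod name from the log
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Printf("pod %q is Ready\n", name)
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			log.Fatalf("timed out waiting for %q", name)
		case <-time.After(2 * time.Second):
		}
	}
}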
	I1209 11:52:25.004323  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.004761  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Found IP for machine: 192.168.50.25
	I1209 11:52:25.004791  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has current primary IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.004798  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Reserving static IP address...
	I1209 11:52:25.005275  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-482476", mac: "52:54:00:f0:c9:8a", ip: "192.168.50.25"} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.005301  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | skip adding static IP to network mk-default-k8s-diff-port-482476 - found existing host DHCP lease matching {name: "default-k8s-diff-port-482476", mac: "52:54:00:f0:c9:8a", ip: "192.168.50.25"}
	I1209 11:52:25.005314  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Reserved static IP address: 192.168.50.25
	I1209 11:52:25.005328  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Waiting for SSH to be available...
	I1209 11:52:25.005342  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Getting to WaitForSSH function...
	I1209 11:52:25.007758  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.008146  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.008189  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.008291  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Using SSH client type: external
	I1209 11:52:25.008318  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa (-rw-------)
	I1209 11:52:25.008348  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:25.008361  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | About to run SSH command:
	I1209 11:52:25.008369  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | exit 0
	I1209 11:52:25.130532  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | SSH cmd err, output: <nil>: 
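[editor's note] The WaitForSSH step above simply runs `exit 0` over SSH with the options shown until the command succeeds. A minimal sketch of that reachability probe follows; the key path and address are copied from the log, the timeout and interval are assumptions:

// sshprobe.go - illustrative sketch of the "exit 0" SSH availability probe.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa",
		"docker@192.168.50.25",
		"exit 0",
	}
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			log.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	log.Fatal("timed out waiting for SSH")
}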
	I1209 11:52:25.130901  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetConfigRaw
	I1209 11:52:25.131568  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:25.134487  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.134816  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.134854  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.135163  663024 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/config.json ...
	I1209 11:52:25.135451  663024 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:25.135480  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:25.135736  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.138444  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.138853  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.138894  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.138981  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.139188  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.139327  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.139491  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.139655  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.139895  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.139906  663024 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:25.242441  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:25.242472  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.242837  663024 buildroot.go:166] provisioning hostname "default-k8s-diff-port-482476"
	I1209 11:52:25.242878  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.243093  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.245995  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.246447  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.246478  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.246685  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.246900  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.247052  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.247175  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.247330  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.247518  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.247531  663024 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-482476 && echo "default-k8s-diff-port-482476" | sudo tee /etc/hostname
	I1209 11:52:25.361366  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-482476
	
	I1209 11:52:25.361397  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.364194  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.364608  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.364639  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.364813  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.365064  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.365267  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.365369  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.365613  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.365790  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.365808  663024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-482476' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-482476/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-482476' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:25.475311  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:25.475346  663024 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:25.475386  663024 buildroot.go:174] setting up certificates
	I1209 11:52:25.475403  663024 provision.go:84] configureAuth start
	I1209 11:52:25.475412  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetMachineName
	I1209 11:52:25.475711  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:25.478574  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.478903  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.478935  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.479055  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.481280  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.481655  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.481688  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.481788  663024 provision.go:143] copyHostCerts
	I1209 11:52:25.481845  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:25.481876  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:25.481957  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:25.482056  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:25.482065  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:25.482090  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:25.482243  663024 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:25.482254  663024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:25.482279  663024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:25.482336  663024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-482476 san=[127.0.0.1 192.168.50.25 default-k8s-diff-port-482476 localhost minikube]
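[editor's note] The provisioning step above issues a server certificate signed by the local minikube CA with the listed SANs (127.0.0.1, 192.168.50.25, the machine name, localhost, minikube). The sketch below shows how such a SAN-bearing server certificate can be issued with crypto/x509; to stay self-contained it generates a throwaway CA instead of loading ca.pem/ca-key.pem, so it is not the provision.go code path:

// issuecert.go - illustrative sketch: server cert with the SANs from the log,
// signed by a freshly generated CA (the real code loads the existing CA files).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-482476"}},
		// SANs taken from the log line above.
		DNSNames:    []string{"default-k8s-diff-port-482476", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.25")},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}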
	I1209 11:52:25.534856  663024 provision.go:177] copyRemoteCerts
	I1209 11:52:25.534921  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:25.534951  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.537732  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.538138  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.538190  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.538390  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.538611  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.538783  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.538943  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:25.619772  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:25.643527  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1209 11:52:25.668517  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:25.693573  663024 provision.go:87] duration metric: took 218.153182ms to configureAuth
	I1209 11:52:25.693615  663024 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:25.693807  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:25.693906  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.696683  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.697058  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.697092  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.697344  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.697548  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.697741  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.697868  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.698033  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:25.698229  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:25.698254  663024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:25.915568  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:25.915595  663024 machine.go:96] duration metric: took 780.126343ms to provisionDockerMachine
	I1209 11:52:25.915610  663024 start.go:293] postStartSetup for "default-k8s-diff-port-482476" (driver="kvm2")
	I1209 11:52:25.915620  663024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:25.915644  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:25.916005  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:25.916047  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:25.919268  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.919602  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:25.919628  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:25.919775  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:25.919967  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:25.920133  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:25.920285  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.000530  663024 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:26.004544  663024 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:26.004574  663024 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:26.004651  663024 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:26.004759  663024 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:26.004885  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:26.013444  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:26.036052  663024 start.go:296] duration metric: took 120.422739ms for postStartSetup
	I1209 11:52:26.036110  663024 fix.go:56] duration metric: took 20.120932786s for fixHost
	I1209 11:52:26.036135  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.039079  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.039445  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.039478  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.039797  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.040065  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.040228  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.040427  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.040620  663024 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:26.040906  663024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.25 22 <nil> <nil>}
	I1209 11:52:26.040924  663024 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:26.142590  663024 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745146.090497627
	
	I1209 11:52:26.142623  663024 fix.go:216] guest clock: 1733745146.090497627
	I1209 11:52:26.142634  663024 fix.go:229] Guest: 2024-12-09 11:52:26.090497627 +0000 UTC Remote: 2024-12-09 11:52:26.036115182 +0000 UTC m=+146.587055001 (delta=54.382445ms)
	I1209 11:52:26.142669  663024 fix.go:200] guest clock delta is within tolerance: 54.382445ms
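[editor's note] The fixHost step above reads the guest clock with `date +%s.%N` over SSH and compares it against the host clock; the ~54ms delta is reported as within tolerance. A rough sketch of that comparison follows; it runs the command locally rather than over SSH, and the tolerance constant is an assumption rather than minikube's actual threshold:

// clockdelta.go - illustrative sketch of the guest-clock tolerance check.
package main

import (
	"fmt"
	"log"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	// In the real flow this command runs over SSH inside the guest.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		log.Fatal(err)
	}
	guestSec, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		log.Fatal(err)
	}
	hostSec := float64(time.Now().UnixNano()) / 1e9

	delta := time.Duration(math.Abs(hostSec-guestSec) * float64(time.Second))
	const tolerance = 2 * time.Second // assumed threshold
	if delta > tolerance {
		log.Fatalf("guest clock delta %v exceeds tolerance %v", delta, tolerance)
	}
	fmt.Printf("guest clock delta %v is within tolerance\n", delta)
}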
	I1209 11:52:26.142681  663024 start.go:83] releasing machines lock for "default-k8s-diff-port-482476", held for 20.227543026s
	I1209 11:52:26.142723  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.143032  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:26.146118  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.146602  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.146634  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.146841  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147440  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147709  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:52:26.147833  663024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:26.147872  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.147980  663024 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:26.148009  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:52:26.151002  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151346  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151379  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.151410  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151534  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.151729  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.151848  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:26.151876  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:26.151904  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.152003  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:52:26.152082  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.152159  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:52:26.152322  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:52:26.152565  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:52:26.231575  663024 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:26.267939  663024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:26.418953  663024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:26.426243  663024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:26.426337  663024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:26.448407  663024 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:26.448442  663024 start.go:495] detecting cgroup driver to use...
	I1209 11:52:26.448540  663024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:26.469675  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:26.488825  663024 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:26.488902  663024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:26.507716  663024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:26.525232  663024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:26.664062  663024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:26.854813  663024 docker.go:233] disabling docker service ...
	I1209 11:52:26.854883  663024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:26.870021  663024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:26.883610  663024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:27.001237  663024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:27.126865  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:27.144121  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:27.168073  663024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:52:27.168242  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.180516  663024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:27.180587  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.191681  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.204047  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.214157  663024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:27.225934  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.236691  663024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.258774  663024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:27.271986  663024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:27.283488  663024 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:27.283539  663024 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:27.299065  663024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:52:27.309203  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:27.431740  663024 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:27.529577  663024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:27.529668  663024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:27.534733  663024 start.go:563] Will wait 60s for crictl version
	I1209 11:52:27.534800  663024 ssh_runner.go:195] Run: which crictl
	I1209 11:52:27.538544  663024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:27.577577  663024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:27.577684  663024 ssh_runner.go:195] Run: crio --version
	I1209 11:52:27.607938  663024 ssh_runner.go:195] Run: crio --version
	I1209 11:52:27.645210  663024 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 11:52:23.133393  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:23.632776  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:24.133286  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:24.632415  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:25.132348  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:25.632478  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:26.132982  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:26.632517  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.132692  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.633291  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:27.646510  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetIP
	I1209 11:52:27.650014  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:27.650439  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:52:27.650469  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:52:27.650705  663024 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:27.654738  663024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:27.668671  663024 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:27.668808  663024 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:52:27.668873  663024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:27.709582  663024 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:52:27.709679  663024 ssh_runner.go:195] Run: which lz4
	I1209 11:52:27.713702  663024 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:27.717851  663024 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:27.717887  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 11:52:29.037160  663024 crio.go:462] duration metric: took 1.32348676s to copy over tarball
	I1209 11:52:29.037262  663024 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:27.500098  661546 main.go:141] libmachine: (embed-certs-005123) Waiting to get IP...
	I1209 11:52:27.501088  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.501538  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.501605  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.501510  663907 retry.go:31] will retry after 191.187925ms: waiting for machine to come up
	I1209 11:52:27.694017  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.694574  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.694605  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.694512  663907 retry.go:31] will retry after 256.268ms: waiting for machine to come up
	I1209 11:52:27.952185  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:27.952863  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:27.952908  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:27.952759  663907 retry.go:31] will retry after 460.272204ms: waiting for machine to come up
	I1209 11:52:28.414403  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:28.414925  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:28.414967  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:28.414873  663907 retry.go:31] will retry after 450.761189ms: waiting for machine to come up
	I1209 11:52:28.867687  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:28.868350  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:28.868389  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:28.868313  663907 retry.go:31] will retry after 615.800863ms: waiting for machine to come up
	I1209 11:52:29.486566  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:29.487179  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:29.487218  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:29.487108  663907 retry.go:31] will retry after 628.641045ms: waiting for machine to come up
	I1209 11:52:30.117051  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:30.117424  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:30.117459  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:30.117356  663907 retry.go:31] will retry after 902.465226ms: waiting for machine to come up
	I1209 11:52:31.021756  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:31.022268  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:31.022298  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:31.022229  663907 retry.go:31] will retry after 918.939368ms: waiting for machine to come up
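[editor's note] While waiting for the restarted VM to obtain an IP, retry.go backs off with intervals that roughly double and carry some jitter (191ms, 256ms, 460ms, 615ms, 902ms, ...). A small sketch of producing jittered, capped backoff delays of that shape is below; the constants are assumptions, not minikube's actual parameters:

// backoff.go - illustrative sketch of jittered, roughly doubling retry delays.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextDelay returns base*2^attempt with +/-50% jitter, capped at max.
func nextDelay(attempt int, base, max time.Duration) time.Duration {
	d := base << attempt
	if d > max {
		d = max
	}
	jitter := 0.5 + rand.Float64() // scale factor in [0.5, 1.5)
	return time.Duration(float64(d) * jitter)
}

func main() {
	for attempt := 0; attempt < 8; attempt++ {
		d := nextDelay(attempt, 200*time.Millisecond, 5*time.Second)
		fmt.Printf("attempt %d: waiting %v\n", attempt, d)
		// time.Sleep(d) // in a real loop, poll the condition here
	}
}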
	I1209 11:52:26.594953  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:29.093499  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:28.132379  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:28.633377  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:29.132983  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:29.633370  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:30.132748  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:30.633383  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.133450  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.633210  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:32.132406  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:32.632598  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.234956  663024 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.197609203s)
	I1209 11:52:31.235007  663024 crio.go:469] duration metric: took 2.197798334s to extract the tarball
	I1209 11:52:31.235018  663024 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:31.275616  663024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:31.320918  663024 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:52:31.320945  663024 cache_images.go:84] Images are preloaded, skipping loading
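[editor's note] The preload check above shells out to `sudo crictl images --output json` and decides whether the expected images (e.g. registry.k8s.io/kube-apiserver:v1.31.2) are already present. A hedged sketch of parsing that output follows; the JSON field names mirror crictl's output but should be treated as assumptions:

// preloadcheck.go - illustrative sketch of checking for a preloaded image via crictl.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatal(err)
	}
	want := "registry.k8s.io/kube-apiserver:v1.31.2" // image referenced in the log
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded image found:", want)
				return
			}
		}
	}
	fmt.Println("preloaded image missing:", want)
}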
	I1209 11:52:31.320961  663024 kubeadm.go:934] updating node { 192.168.50.25 8444 v1.31.2 crio true true} ...
	I1209 11:52:31.321122  663024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-482476 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:31.321246  663024 ssh_runner.go:195] Run: crio config
	I1209 11:52:31.367805  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:52:31.367827  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:31.367839  663024 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:31.367863  663024 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.25 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-482476 NodeName:default-k8s-diff-port-482476 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:31.368005  663024 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.25
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-482476"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.25"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.25"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
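[editor's note] The rendered config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. A small sketch that decodes each document and prints its kind can be handy for sanity-checking such a file; this is an illustrative helper, not part of minikube:

// kinds.go - illustrative sketch: list the kinds in a multi-document kubeadm config.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log; adjust as needed
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}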
	I1209 11:52:31.368074  663024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:31.377831  663024 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:31.377902  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:31.386872  663024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1209 11:52:31.403764  663024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:31.419295  663024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1209 11:52:31.435856  663024 ssh_runner.go:195] Run: grep 192.168.50.25	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:31.439480  663024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:31.455136  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:31.573295  663024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:31.589679  663024 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476 for IP: 192.168.50.25
	I1209 11:52:31.589703  663024 certs.go:194] generating shared ca certs ...
	I1209 11:52:31.589741  663024 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:31.589930  663024 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:31.589982  663024 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:31.589995  663024 certs.go:256] generating profile certs ...
	I1209 11:52:31.590137  663024 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.key
	I1209 11:52:31.590256  663024 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.key.e2346b12
	I1209 11:52:31.590322  663024 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.key
	I1209 11:52:31.590479  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:31.590522  663024 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:31.590535  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:31.590571  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:31.590612  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:31.590649  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:31.590710  663024 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:31.591643  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:31.634363  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:31.660090  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:31.692933  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:31.726010  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1209 11:52:31.757565  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 11:52:31.781368  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:31.805233  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:31.828391  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:31.850407  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:31.873159  663024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:31.895503  663024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:31.911754  663024 ssh_runner.go:195] Run: openssl version
	I1209 11:52:31.917771  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:31.929857  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.934518  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.934596  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:31.940382  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:31.951417  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:31.961966  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.966234  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.966286  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:31.972070  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 11:52:31.982547  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:31.993215  663024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:31.997579  663024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:31.997641  663024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:32.003050  663024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:32.013463  663024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:32.017936  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:32.024029  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:32.029686  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:32.035260  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:32.040696  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:32.046116  663024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
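	The block above verifies certificate freshness with `openssl x509 -checkend 86400` (fail if the certificate expires within 24 hours). A minimal Go sketch of the same check, using only the standard library, is shown below; the certificate path is the one from the log and the 24-hour window mirrors the 86400-second argument. This is an illustration of the check, not minikube's own implementation.

	// sketch: report whether a PEM certificate expires within the next 24 hours
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// equivalent of `openssl x509 -checkend 86400`
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h, notAfter:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid beyond 24h, notAfter:", cert.NotAfter)
	}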
	I1209 11:52:32.051521  663024 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-482476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-482476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:32.051605  663024 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:32.051676  663024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:32.092529  663024 cri.go:89] found id: ""
	I1209 11:52:32.092623  663024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:32.103153  663024 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:32.103183  663024 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:32.103247  663024 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:32.113029  663024 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:32.114506  663024 kubeconfig.go:125] found "default-k8s-diff-port-482476" server: "https://192.168.50.25:8444"
	I1209 11:52:32.116929  663024 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:32.127055  663024 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.25
	I1209 11:52:32.127108  663024 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:32.127124  663024 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:32.127189  663024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:32.169401  663024 cri.go:89] found id: ""
	I1209 11:52:32.169507  663024 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:32.187274  663024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:32.196843  663024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:32.196867  663024 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:32.196925  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 11:52:32.205670  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:32.205754  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:32.214977  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 11:52:32.223707  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:32.223782  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:32.232514  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 11:52:32.240999  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:32.241076  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:32.250049  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 11:52:32.258782  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:32.258846  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:32.268447  663024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:32.277875  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:32.394016  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.494978  663024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.100920844s)
	I1209 11:52:33.495030  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.719319  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.787272  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:33.882783  663024 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:33.882876  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.383090  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:31.942735  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:31.943207  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:31.943244  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:31.943141  663907 retry.go:31] will retry after 1.153139191s: waiting for machine to come up
	I1209 11:52:33.097672  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:33.098233  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:33.098299  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:33.098199  663907 retry.go:31] will retry after 2.002880852s: waiting for machine to come up
	I1209 11:52:35.103239  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:35.103693  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:35.103724  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:35.103639  663907 retry.go:31] will retry after 2.219510124s: waiting for machine to come up
	I1209 11:52:31.593184  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:34.090877  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:36.094569  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:33.132924  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:33.632884  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.132528  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.632989  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.133398  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.632376  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:36.132936  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:36.633152  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:37.133343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:37.633367  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:34.883172  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.384008  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.883940  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:35.901453  663024 api_server.go:72] duration metric: took 2.018670363s to wait for apiserver process to appear ...
	I1209 11:52:35.901489  663024 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:35.901524  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.225976  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:38.226017  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:38.226037  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.269459  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:38.269549  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:38.401652  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.407995  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:38.408028  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:38.902416  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:38.914550  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:38.914579  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:39.401719  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:39.409382  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:39.409427  663024 api_server.go:103] status: https://192.168.50.25:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:39.902488  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:52:39.907511  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 200:
	ok
	I1209 11:52:39.914532  663024 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:39.914562  663024 api_server.go:131] duration metric: took 4.013066199s to wait for apiserver health ...
	I1209 11:52:39.914586  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:52:39.914594  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:39.915954  663024 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
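	The healthz wait above polls https://192.168.50.25:8444/healthz, tolerating 403 and 500 responses until the post-start hooks finish and the endpoint returns 200. The Go sketch below shows the same polling pattern in isolation; it is not minikube's code. TLS verification is skipped only because the sketch has no access to the cluster CA, and the URL and deadline are assumptions taken from the log.

	// sketch: poll an apiserver /healthz endpoint until it returns 200 or a deadline passes
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.50.25:8444/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			} else {
				fmt.Println("healthz request failed, retrying:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Fprintln(os.Stderr, "apiserver did not become healthy before deadline")
		os.Exit(1)
	}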
	I1209 11:52:37.324833  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:37.325397  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:37.325430  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:37.325338  663907 retry.go:31] will retry after 3.636796307s: waiting for machine to come up
	I1209 11:52:40.966039  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:40.966438  661546 main.go:141] libmachine: (embed-certs-005123) DBG | unable to find current IP address of domain embed-certs-005123 in network mk-embed-certs-005123
	I1209 11:52:40.966463  661546 main.go:141] libmachine: (embed-certs-005123) DBG | I1209 11:52:40.966419  663907 retry.go:31] will retry after 3.704289622s: waiting for machine to come up
	I1209 11:52:38.592804  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:40.593407  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:38.133368  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:38.632475  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.132993  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.633225  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:40.132552  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:40.633292  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:41.132443  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:41.632994  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:42.132631  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:42.633378  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:39.917397  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:39.928995  663024 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:52:39.953045  663024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:39.962582  663024 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:39.962628  663024 system_pods.go:61] "coredns-7c65d6cfc9-zzrgn" [dca7a835-3b66-4515-b571-6420afc42c44] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:39.962639  663024 system_pods.go:61] "etcd-default-k8s-diff-port-482476" [2323dbbc-9e7f-4047-b0be-b68b851f4986] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:39.962649  663024 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-482476" [0b7a4936-5282-46a4-a08a-e225b303f6f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:39.962658  663024 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-482476" [c6ff79a0-2177-4c79-8021-c523f8d53e9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:39.962666  663024 system_pods.go:61] "kube-proxy-6th5d" [0cff6df1-1adb-4b7e-8d59-a837db026339] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 11:52:39.962682  663024 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-482476" [524125eb-afd4-4e20-b0f0-e58019e84962] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:39.962694  663024 system_pods.go:61] "metrics-server-6867b74b74-bpccn" [7426c800-9ff7-4778-82a0-6c71fd05a222] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:39.962702  663024 system_pods.go:61] "storage-provisioner" [4478313a-58e8-4d24-ab0b-c087e664200d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:39.962711  663024 system_pods.go:74] duration metric: took 9.637672ms to wait for pod list to return data ...
	I1209 11:52:39.962725  663024 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:39.969576  663024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:39.969611  663024 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:39.969627  663024 node_conditions.go:105] duration metric: took 6.893708ms to run NodePressure ...
	I1209 11:52:39.969660  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:40.340239  663024 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:40.345384  663024 kubeadm.go:739] kubelet initialised
	I1209 11:52:40.345412  663024 kubeadm.go:740] duration metric: took 5.145751ms waiting for restarted kubelet to initialise ...
	I1209 11:52:40.345425  663024 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:40.350721  663024 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:42.357138  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:44.361981  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:44.674598  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.675048  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has current primary IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.675068  661546 main.go:141] libmachine: (embed-certs-005123) Found IP for machine: 192.168.72.218
	I1209 11:52:44.675075  661546 main.go:141] libmachine: (embed-certs-005123) Reserving static IP address...
	I1209 11:52:44.675492  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "embed-certs-005123", mac: "52:54:00:ee:a0:a8", ip: "192.168.72.218"} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.675522  661546 main.go:141] libmachine: (embed-certs-005123) DBG | skip adding static IP to network mk-embed-certs-005123 - found existing host DHCP lease matching {name: "embed-certs-005123", mac: "52:54:00:ee:a0:a8", ip: "192.168.72.218"}
	I1209 11:52:44.675537  661546 main.go:141] libmachine: (embed-certs-005123) Reserved static IP address: 192.168.72.218
	I1209 11:52:44.675555  661546 main.go:141] libmachine: (embed-certs-005123) Waiting for SSH to be available...
	I1209 11:52:44.675566  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Getting to WaitForSSH function...
	I1209 11:52:44.677490  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.677814  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.677860  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.677952  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Using SSH client type: external
	I1209 11:52:44.678012  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Using SSH private key: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa (-rw-------)
	I1209 11:52:44.678042  661546 main.go:141] libmachine: (embed-certs-005123) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 11:52:44.678056  661546 main.go:141] libmachine: (embed-certs-005123) DBG | About to run SSH command:
	I1209 11:52:44.678068  661546 main.go:141] libmachine: (embed-certs-005123) DBG | exit 0
	I1209 11:52:44.798377  661546 main.go:141] libmachine: (embed-certs-005123) DBG | SSH cmd err, output: <nil>: 
	I1209 11:52:44.798782  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetConfigRaw
	I1209 11:52:44.799532  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:44.801853  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.802223  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.802255  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.802539  661546 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/config.json ...
	I1209 11:52:44.802777  661546 machine.go:93] provisionDockerMachine start ...
	I1209 11:52:44.802799  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:44.802994  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:44.805481  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.805803  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.805838  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.805999  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:44.806219  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.806386  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.806555  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:44.806716  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:44.806886  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:44.806897  661546 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 11:52:44.914443  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 11:52:44.914480  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:44.914783  661546 buildroot.go:166] provisioning hostname "embed-certs-005123"
	I1209 11:52:44.914810  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:44.914973  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:44.918053  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.918471  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:44.918508  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:44.918701  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:44.918935  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.919087  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:44.919267  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:44.919452  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:44.919624  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:44.919645  661546 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-005123 && echo "embed-certs-005123" | sudo tee /etc/hostname
	I1209 11:52:45.032725  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-005123
	
	I1209 11:52:45.032760  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.035820  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.036222  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.036263  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.036466  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.036666  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.036864  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.037003  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.037189  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.037396  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.037413  661546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-005123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-005123/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-005123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 11:52:45.147189  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 11:52:45.147225  661546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20068-609844/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-609844/.minikube}
	I1209 11:52:45.147284  661546 buildroot.go:174] setting up certificates
	I1209 11:52:45.147299  661546 provision.go:84] configureAuth start
	I1209 11:52:45.147313  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetMachineName
	I1209 11:52:45.147667  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:45.150526  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.150965  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.151009  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.151118  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.153778  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.154178  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.154213  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.154382  661546 provision.go:143] copyHostCerts
	I1209 11:52:45.154455  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem, removing ...
	I1209 11:52:45.154478  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem
	I1209 11:52:45.154549  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/ca.pem (1082 bytes)
	I1209 11:52:45.154673  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem, removing ...
	I1209 11:52:45.154685  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem
	I1209 11:52:45.154717  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/cert.pem (1123 bytes)
	I1209 11:52:45.154816  661546 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem, removing ...
	I1209 11:52:45.154829  661546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem
	I1209 11:52:45.154857  661546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-609844/.minikube/key.pem (1679 bytes)
	I1209 11:52:45.154935  661546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem org=jenkins.embed-certs-005123 san=[127.0.0.1 192.168.72.218 embed-certs-005123 localhost minikube]
	I1209 11:52:45.382712  661546 provision.go:177] copyRemoteCerts
	I1209 11:52:45.382772  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 11:52:45.382801  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.385625  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.386020  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.386050  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.386241  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.386448  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.386626  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.386765  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:45.464427  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 11:52:45.488111  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1209 11:52:45.511231  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 11:52:45.534104  661546 provision.go:87] duration metric: took 386.787703ms to configureAuth
	I1209 11:52:45.534141  661546 buildroot.go:189] setting minikube options for container-runtime
	I1209 11:52:45.534411  661546 config.go:182] Loaded profile config "embed-certs-005123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:52:45.534526  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.537936  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.538370  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.538402  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.538584  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.538826  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.539019  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.539150  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.539378  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.539551  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.539568  661546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 11:52:45.771215  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 11:52:45.771259  661546 machine.go:96] duration metric: took 968.466766ms to provisionDockerMachine
	I1209 11:52:45.771276  661546 start.go:293] postStartSetup for "embed-certs-005123" (driver="kvm2")
	I1209 11:52:45.771287  661546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 11:52:45.771316  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:45.771673  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 11:52:45.771709  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.774881  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.775294  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.775340  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.775510  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.775714  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.775899  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.776065  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:45.856991  661546 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 11:52:45.862195  661546 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 11:52:45.862224  661546 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/addons for local assets ...
	I1209 11:52:45.862295  661546 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-609844/.minikube/files for local assets ...
	I1209 11:52:45.862368  661546 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem -> 6170172.pem in /etc/ssl/certs
	I1209 11:52:45.862497  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 11:52:45.874850  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:45.899279  661546 start.go:296] duration metric: took 127.984399ms for postStartSetup
	I1209 11:52:45.899332  661546 fix.go:56] duration metric: took 19.756446591s for fixHost
	I1209 11:52:45.899362  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:45.902428  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.902828  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:45.902861  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:45.903117  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:45.903344  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.903554  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:45.903704  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:45.903955  661546 main.go:141] libmachine: Using SSH client type: native
	I1209 11:52:45.904191  661546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.218 22 <nil> <nil>}
	I1209 11:52:45.904209  661546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 11:52:46.007164  661546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733745165.964649155
	
	I1209 11:52:46.007194  661546 fix.go:216] guest clock: 1733745165.964649155
	I1209 11:52:46.007217  661546 fix.go:229] Guest: 2024-12-09 11:52:45.964649155 +0000 UTC Remote: 2024-12-09 11:52:45.899337716 +0000 UTC m=+369.711404421 (delta=65.311439ms)
	I1209 11:52:46.007267  661546 fix.go:200] guest clock delta is within tolerance: 65.311439ms
	I1209 11:52:46.007280  661546 start.go:83] releasing machines lock for "embed-certs-005123", held for 19.864428938s
	I1209 11:52:46.007313  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.007616  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:46.011273  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.011799  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.011830  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.012074  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.012681  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.012907  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:52:46.013027  661546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 11:52:46.013099  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:46.013170  661546 ssh_runner.go:195] Run: cat /version.json
	I1209 11:52:46.013196  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:52:46.016473  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016764  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016840  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.016875  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.016964  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:46.017186  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:46.017287  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:46.017401  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:46.017442  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:46.017480  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:52:46.017553  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:46.017785  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:52:46.017911  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:52:46.018075  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:52:46.129248  661546 ssh_runner.go:195] Run: systemctl --version
	I1209 11:52:46.136309  661546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 11:52:43.091899  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:45.592415  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:46.287879  661546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 11:52:46.293689  661546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 11:52:46.293770  661546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 11:52:46.311972  661546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 11:52:46.312009  661546 start.go:495] detecting cgroup driver to use...
	I1209 11:52:46.312085  661546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 11:52:46.329406  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 11:52:46.344607  661546 docker.go:217] disabling cri-docker service (if available) ...
	I1209 11:52:46.344664  661546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 11:52:46.360448  661546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 11:52:46.374509  661546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 11:52:46.503687  661546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 11:52:46.649152  661546 docker.go:233] disabling docker service ...
	I1209 11:52:46.649234  661546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 11:52:46.663277  661546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 11:52:46.677442  661546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 11:52:46.832667  661546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 11:52:46.949826  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 11:52:46.963119  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 11:52:46.981743  661546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 11:52:46.981834  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:46.991634  661546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 11:52:46.991706  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.004032  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.015001  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.025000  661546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 11:52:47.035513  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.045431  661546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.061931  661546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 11:52:47.071531  661546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 11:52:47.080492  661546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 11:52:47.080559  661546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 11:52:47.094021  661546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 11:52:47.104015  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:47.226538  661546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 11:52:47.318832  661546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 11:52:47.318911  661546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 11:52:47.323209  661546 start.go:563] Will wait 60s for crictl version
	I1209 11:52:47.323276  661546 ssh_runner.go:195] Run: which crictl
	I1209 11:52:47.326773  661546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 11:52:47.365536  661546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 11:52:47.365629  661546 ssh_runner.go:195] Run: crio --version
	I1209 11:52:47.392781  661546 ssh_runner.go:195] Run: crio --version
	I1209 11:52:47.422945  661546 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
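	(The sed edits logged at 11:52:46-47 above touch four CRI-O settings — pause_image, cgroup_manager, conmon_cgroup and the default_sysctls entry — in /etc/crio/crio.conf.d/02-crio.conf. A quick way to confirm the resulting values inside the VM would be a grep over that drop-in; the command below is illustrative and is not part of the test run.)
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# expected per the sed commands above: pause_image = "registry.k8s.io/pause:3.10",
	# cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and a default_sysctls entry
	# "net.ipv4.ip_unprivileged_port_start=0"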
	I1209 11:52:43.133189  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:43.632726  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:44.132804  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:44.632952  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:45.132474  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:45.633318  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.133116  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.632595  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:47.133211  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:47.633233  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:46.858128  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:49.358845  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:47.423936  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetIP
	I1209 11:52:47.426959  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:47.427401  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:52:47.427425  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:52:47.427636  661546 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1209 11:52:47.432509  661546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:47.448620  661546 kubeadm.go:883] updating cluster {Name:embed-certs-005123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 11:52:47.448772  661546 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 11:52:47.448824  661546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:47.485100  661546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 11:52:47.485173  661546 ssh_runner.go:195] Run: which lz4
	I1209 11:52:47.489202  661546 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 11:52:47.493060  661546 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 11:52:47.493093  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 11:52:48.772297  661546 crio.go:462] duration metric: took 1.283133931s to copy over tarball
	I1209 11:52:48.772381  661546 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 11:52:50.959318  661546 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.18690714s)
	I1209 11:52:50.959352  661546 crio.go:469] duration metric: took 2.187018432s to extract the tarball
	I1209 11:52:50.959359  661546 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 11:52:50.995746  661546 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 11:52:51.037764  661546 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 11:52:51.037792  661546 cache_images.go:84] Images are preloaded, skipping loading
	I1209 11:52:51.037799  661546 kubeadm.go:934] updating node { 192.168.72.218 8443 v1.31.2 crio true true} ...
	I1209 11:52:51.037909  661546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-005123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 11:52:51.037972  661546 ssh_runner.go:195] Run: crio config
	I1209 11:52:51.080191  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:52:51.080220  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:51.080231  661546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 11:52:51.080258  661546 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.218 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-005123 NodeName:embed-certs-005123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 11:52:51.080442  661546 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-005123"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.218"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.218"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 11:52:51.080544  661546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 11:52:51.091894  661546 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 11:52:51.091975  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 11:52:51.101702  661546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1209 11:52:51.117636  661546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 11:52:51.133662  661546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
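	(The kubeadm/kubelet/kube-proxy configuration printed above is the content written to /var/tmp/minikube/kubeadm.yaml.new by the scp line just before this note. A hypothetical way to sanity-check such a file by hand with the bundled kubeadm binary — not a step the harness runs here — would be:)
	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new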
	I1209 11:52:51.151725  661546 ssh_runner.go:195] Run: grep 192.168.72.218	control-plane.minikube.internal$ /etc/hosts
	I1209 11:52:51.155759  661546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 11:52:51.167480  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:52:47.592707  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:50.093177  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:48.132348  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:48.632894  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:49.133272  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:49.633015  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.132977  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.632533  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:51.132939  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:51.632463  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:52.133082  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:52.633298  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:50.357709  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.357740  663024 pod_ready.go:82] duration metric: took 10.006992001s for pod "coredns-7c65d6cfc9-zzrgn" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.357752  663024 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.363374  663024 pod_ready.go:93] pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.363403  663024 pod_ready.go:82] duration metric: took 5.642657ms for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.363417  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.368456  663024 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.368478  663024 pod_ready.go:82] duration metric: took 5.053713ms for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.368488  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.374156  663024 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.374205  663024 pod_ready.go:82] duration metric: took 5.708489ms for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.374219  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6th5d" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.378734  663024 pod_ready.go:93] pod "kube-proxy-6th5d" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:50.378752  663024 pod_ready.go:82] duration metric: took 4.526066ms for pod "kube-proxy-6th5d" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:50.378760  663024 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:52.384763  663024 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:53.389110  663024 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:52:53.389146  663024 pod_ready.go:82] duration metric: took 3.010378852s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:53.389162  663024 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace to be "Ready" ...
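	(pod_ready.go above polls each pod's Ready condition until it becomes True or the 4m0s budget runs out. An equivalent one-off check from the host would look roughly like the command below; the profile name is reused as the kubectl context, which is minikube's convention, and the jsonpath expression is illustrative.)
	kubectl --context default-k8s-diff-port-482476 -n kube-system \
	    get pod metrics-server-6867b74b74-bpccn \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'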
	I1209 11:52:51.305408  661546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:52:51.330738  661546 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123 for IP: 192.168.72.218
	I1209 11:52:51.330766  661546 certs.go:194] generating shared ca certs ...
	I1209 11:52:51.330791  661546 certs.go:226] acquiring lock for ca certs: {Name:mk2df5887e08965d909a9c950da5dfffb8a04ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:52:51.331002  661546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key
	I1209 11:52:51.331099  661546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key
	I1209 11:52:51.331116  661546 certs.go:256] generating profile certs ...
	I1209 11:52:51.331252  661546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/client.key
	I1209 11:52:51.331333  661546 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.key.a40d22b0
	I1209 11:52:51.331400  661546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.key
	I1209 11:52:51.331595  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem (1338 bytes)
	W1209 11:52:51.331631  661546 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017_empty.pem, impossibly tiny 0 bytes
	I1209 11:52:51.331645  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 11:52:51.331680  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/ca.pem (1082 bytes)
	I1209 11:52:51.331717  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/cert.pem (1123 bytes)
	I1209 11:52:51.331747  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/certs/key.pem (1679 bytes)
	I1209 11:52:51.331824  661546 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem (1708 bytes)
	I1209 11:52:51.332728  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 11:52:51.366002  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 11:52:51.400591  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 11:52:51.431219  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 11:52:51.459334  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1209 11:52:51.487240  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 11:52:51.522273  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 11:52:51.545757  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/embed-certs-005123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 11:52:51.572793  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 11:52:51.595719  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/certs/617017.pem --> /usr/share/ca-certificates/617017.pem (1338 bytes)
	I1209 11:52:51.618456  661546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/ssl/certs/6170172.pem --> /usr/share/ca-certificates/6170172.pem (1708 bytes)
	I1209 11:52:51.643337  661546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 11:52:51.659719  661546 ssh_runner.go:195] Run: openssl version
	I1209 11:52:51.665339  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/617017.pem && ln -fs /usr/share/ca-certificates/617017.pem /etc/ssl/certs/617017.pem"
	I1209 11:52:51.676145  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.680615  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 10:45 /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.680670  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/617017.pem
	I1209 11:52:51.686782  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/617017.pem /etc/ssl/certs/51391683.0"
	I1209 11:52:51.697398  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6170172.pem && ln -fs /usr/share/ca-certificates/6170172.pem /etc/ssl/certs/6170172.pem"
	I1209 11:52:51.707438  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.711764  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 10:45 /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.711832  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6170172.pem
	I1209 11:52:51.717278  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6170172.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 11:52:51.727774  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 11:52:51.738575  661546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.742996  661546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.743057  661546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 11:52:51.748505  661546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
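	(The 51391683.0, 3ec20f2e.0 and b5213941.0 targets in the ln -fs commands above are OpenSSL subject-hash symlinks, which is how the system trust store looks up CA certificates; each hash comes from the "openssl x509 -hash -noout" call logged just before the corresponding symlink. Reproducing the last one by hand, for illustration only:)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# expected output: b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above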
	I1209 11:52:51.758738  661546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 11:52:51.763005  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 11:52:51.768964  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 11:52:51.775011  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 11:52:51.780810  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 11:52:51.786716  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 11:52:51.792351  661546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 11:52:51.798098  661546 kubeadm.go:392] StartCluster: {Name:embed-certs-005123 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-005123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 11:52:51.798239  661546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 11:52:51.798296  661546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:51.840669  661546 cri.go:89] found id: ""
	I1209 11:52:51.840755  661546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 11:52:51.850404  661546 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 11:52:51.850429  661546 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 11:52:51.850474  661546 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 11:52:51.859350  661546 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:52:51.860405  661546 kubeconfig.go:125] found "embed-certs-005123" server: "https://192.168.72.218:8443"
	I1209 11:52:51.862591  661546 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 11:52:51.872497  661546 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.218
	I1209 11:52:51.872539  661546 kubeadm.go:1160] stopping kube-system containers ...
	I1209 11:52:51.872558  661546 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 11:52:51.872638  661546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 11:52:51.913221  661546 cri.go:89] found id: ""
	I1209 11:52:51.913316  661546 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 11:52:51.929885  661546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:52:51.940078  661546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:52:51.940105  661546 kubeadm.go:157] found existing configuration files:
	
	I1209 11:52:51.940166  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:52:51.948911  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:52:51.948977  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:52:51.958278  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:52:51.966808  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:52:51.966879  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:52:51.975480  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:52:51.984071  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:52:51.984127  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:52:51.992480  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:52:52.000798  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:52:52.000873  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:52:52.009553  661546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:52:52.019274  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:52.133477  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.081976  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.293871  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.364259  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:53.452043  661546 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:52:53.452147  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:53.952743  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.452498  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.952482  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.452783  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.483411  661546 api_server.go:72] duration metric: took 2.0313706s to wait for apiserver process to appear ...
	I1209 11:52:55.483448  661546 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:52:55.483473  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:55.483982  661546 api_server.go:269] stopped: https://192.168.72.218:8443/healthz: Get "https://192.168.72.218:8443/healthz": dial tcp 192.168.72.218:8443: connect: connection refused
	I1209 11:52:55.983589  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:52.592309  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:55.257400  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:53.132520  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:53.633385  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.132432  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:54.632974  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.132958  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.633343  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:56.132687  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:56.633236  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:57.133489  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:57.633105  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:55.396602  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:57.397077  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:58.136225  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:58.136259  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:58.136276  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.174521  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1209 11:52:58.174583  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1209 11:52:58.484089  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.489495  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:58.489536  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:58.984185  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:58.990889  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 11:52:58.990932  661546 api_server.go:103] status: https://192.168.72.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 11:52:59.484415  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:52:59.490878  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 200:
	ok
	I1209 11:52:59.498196  661546 api_server.go:141] control plane version: v1.31.2
	I1209 11:52:59.498231  661546 api_server.go:131] duration metric: took 4.014775842s to wait for apiserver health ...
	I1209 11:52:59.498241  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:52:59.498247  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:52:59.499779  661546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:52:59.500941  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:52:59.514201  661546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:52:59.544391  661546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:52:59.555798  661546 system_pods.go:59] 8 kube-system pods found
	I1209 11:52:59.555837  661546 system_pods.go:61] "coredns-7c65d6cfc9-cdnjm" [7cb724f8-c570-4a19-808d-da994ec43eaa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 11:52:59.555849  661546 system_pods.go:61] "etcd-embed-certs-005123" [bf194765-7520-4b5d-a1e5-b49830a0f620] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 11:52:59.555858  661546 system_pods.go:61] "kube-apiserver-embed-certs-005123" [470f6c19-0112-4b0d-89d9-b792e912cf6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 11:52:59.555863  661546 system_pods.go:61] "kube-controller-manager-embed-certs-005123" [b42748b2-f3a9-4d29-a832-a30d54b329c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 11:52:59.555868  661546 system_pods.go:61] "kube-proxy-b7bf2" [f9aab69c-2232-4f56-a502-ffd033f7ac10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1209 11:52:59.555877  661546 system_pods.go:61] "kube-scheduler-embed-certs-005123" [e61a8e3c-c1d3-4dab-abb2-6f5221bc5d25] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 11:52:59.555885  661546 system_pods.go:61] "metrics-server-6867b74b74-x4kvn" [210cb99c-e3e7-4337-bed4-985cb98143dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:52:59.555893  661546 system_pods.go:61] "storage-provisioner" [f2f7d9e2-1121-4df2-adb7-a0af32f957ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 11:52:59.555903  661546 system_pods.go:74] duration metric: took 11.485008ms to wait for pod list to return data ...
	I1209 11:52:59.555913  661546 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:52:59.560077  661546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:52:59.560100  661546 node_conditions.go:123] node cpu capacity is 2
	I1209 11:52:59.560110  661546 node_conditions.go:105] duration metric: took 4.192476ms to run NodePressure ...
	I1209 11:52:59.560132  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 11:52:59.890141  661546 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1209 11:52:59.895382  661546 kubeadm.go:739] kubelet initialised
	I1209 11:52:59.895414  661546 kubeadm.go:740] duration metric: took 5.227549ms waiting for restarted kubelet to initialise ...
	I1209 11:52:59.895425  661546 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:52:59.901454  661546 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace to be "Ready" ...
	I1209 11:52:57.593336  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:00.094942  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:52:58.132858  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:58.633386  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.132544  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.633427  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:00.133402  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:00.632719  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:01.132786  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:01.632909  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:02.133197  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:02.632620  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:52:59.896691  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:02.396546  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:01.907730  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:03.910835  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:02.591692  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:05.090892  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:03.133091  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:03.633389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.132587  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.633239  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:05.132773  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:05.632456  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:06.132989  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:06.632584  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:07.133153  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:07.633389  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:04.895599  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:06.896541  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.912963  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:06.408122  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.412579  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:10.419673  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:10.419702  661546 pod_ready.go:82] duration metric: took 10.518223469s for pod "coredns-7c65d6cfc9-cdnjm" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:10.419716  661546 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:07.591181  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:10.091248  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:08.132885  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:08.633192  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:09.132446  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:09.633385  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:10.132534  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:10.632399  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.132877  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.633091  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:12.132592  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:12.633185  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:11.396121  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.901605  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:12.425696  661546 pod_ready.go:103] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.926007  661546 pod_ready.go:93] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.926041  661546 pod_ready.go:82] duration metric: took 3.50631846s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.926053  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.931124  661546 pod_ready.go:93] pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.931150  661546 pod_ready.go:82] duration metric: took 5.090118ms for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.931163  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.935763  661546 pod_ready.go:93] pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.935783  661546 pod_ready.go:82] duration metric: took 4.613902ms for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.935792  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b7bf2" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.940013  661546 pod_ready.go:93] pod "kube-proxy-b7bf2" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.940037  661546 pod_ready.go:82] duration metric: took 4.238468ms for pod "kube-proxy-b7bf2" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.940050  661546 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.944480  661546 pod_ready.go:93] pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:53:13.944497  661546 pod_ready.go:82] duration metric: took 4.439334ms for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:13.944504  661546 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" ...
	I1209 11:53:15.951194  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:12.091413  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:14.591239  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:13.132852  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:13.632863  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:14.132638  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:14.632522  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:15.133201  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:15.632442  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:16.132620  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:16.132747  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:16.171708  662586 cri.go:89] found id: ""
	I1209 11:53:16.171748  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.171761  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:16.171768  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:16.171823  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:16.206350  662586 cri.go:89] found id: ""
	I1209 11:53:16.206381  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.206390  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:16.206398  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:16.206468  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:16.239292  662586 cri.go:89] found id: ""
	I1209 11:53:16.239325  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.239334  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:16.239341  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:16.239397  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:16.275809  662586 cri.go:89] found id: ""
	I1209 11:53:16.275841  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.275850  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:16.275856  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:16.275913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:16.310434  662586 cri.go:89] found id: ""
	I1209 11:53:16.310466  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.310474  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:16.310480  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:16.310540  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:16.347697  662586 cri.go:89] found id: ""
	I1209 11:53:16.347729  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.347738  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:16.347745  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:16.347801  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:16.380949  662586 cri.go:89] found id: ""
	I1209 11:53:16.380977  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.380985  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:16.380992  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:16.381074  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:16.415236  662586 cri.go:89] found id: ""
	I1209 11:53:16.415268  662586 logs.go:282] 0 containers: []
	W1209 11:53:16.415290  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:16.415304  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:16.415321  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:16.459614  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:16.459645  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:16.509575  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:16.509617  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:16.522864  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:16.522898  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:16.644997  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:16.645059  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:16.645106  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:16.396028  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:18.397195  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:17.951721  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.952199  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:16.591767  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.091470  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:21.095835  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:19.220978  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:19.233506  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:19.233597  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:19.268975  662586 cri.go:89] found id: ""
	I1209 11:53:19.269007  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.269019  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:19.269027  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:19.269103  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:19.304898  662586 cri.go:89] found id: ""
	I1209 11:53:19.304935  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.304949  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:19.304957  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:19.305034  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:19.344798  662586 cri.go:89] found id: ""
	I1209 11:53:19.344835  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.344846  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:19.344855  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:19.344925  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:19.395335  662586 cri.go:89] found id: ""
	I1209 11:53:19.395377  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.395387  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:19.395395  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:19.395464  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:19.430334  662586 cri.go:89] found id: ""
	I1209 11:53:19.430364  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.430377  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:19.430386  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:19.430465  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:19.468732  662586 cri.go:89] found id: ""
	I1209 11:53:19.468766  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.468775  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:19.468782  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:19.468836  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:19.503194  662586 cri.go:89] found id: ""
	I1209 11:53:19.503242  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.503255  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:19.503263  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:19.503328  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:19.537074  662586 cri.go:89] found id: ""
	I1209 11:53:19.537114  662586 logs.go:282] 0 containers: []
	W1209 11:53:19.537125  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:19.537135  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:19.537151  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:19.590081  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:19.590130  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:19.604350  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:19.604388  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:19.683073  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:19.683106  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:19.683124  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:19.763564  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:19.763611  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:22.302792  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:22.315992  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:22.316079  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:22.350464  662586 cri.go:89] found id: ""
	I1209 11:53:22.350495  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.350505  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:22.350511  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:22.350569  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:22.382832  662586 cri.go:89] found id: ""
	I1209 11:53:22.382867  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.382880  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:22.382889  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:22.382958  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:22.417826  662586 cri.go:89] found id: ""
	I1209 11:53:22.417859  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.417871  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:22.417880  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:22.417963  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:22.451545  662586 cri.go:89] found id: ""
	I1209 11:53:22.451579  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.451588  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:22.451594  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:22.451659  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:22.488413  662586 cri.go:89] found id: ""
	I1209 11:53:22.488448  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.488458  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:22.488464  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:22.488531  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:22.523891  662586 cri.go:89] found id: ""
	I1209 11:53:22.523916  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.523925  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:22.523931  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:22.523990  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:22.555828  662586 cri.go:89] found id: ""
	I1209 11:53:22.555866  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.555879  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:22.555887  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:22.555960  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:22.592133  662586 cri.go:89] found id: ""
	I1209 11:53:22.592171  662586 logs.go:282] 0 containers: []
	W1209 11:53:22.592181  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:22.592192  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:22.592209  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:22.641928  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:22.641966  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:22.655182  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:22.655215  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:53:20.896189  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:23.397242  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:21.957934  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:24.451292  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:23.591147  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:25.591982  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	W1209 11:53:22.724320  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:22.724343  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:22.724359  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:22.811692  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:22.811743  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:25.347903  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:25.360839  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:25.360907  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:25.392880  662586 cri.go:89] found id: ""
	I1209 11:53:25.392917  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.392930  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:25.392939  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:25.393008  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:25.427862  662586 cri.go:89] found id: ""
	I1209 11:53:25.427905  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.427914  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:25.427921  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:25.428009  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:25.463733  662586 cri.go:89] found id: ""
	I1209 11:53:25.463767  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.463778  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:25.463788  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:25.463884  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:25.501653  662586 cri.go:89] found id: ""
	I1209 11:53:25.501681  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.501690  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:25.501697  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:25.501751  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:25.535368  662586 cri.go:89] found id: ""
	I1209 11:53:25.535410  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.535422  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:25.535431  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:25.535511  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:25.569709  662586 cri.go:89] found id: ""
	I1209 11:53:25.569739  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.569748  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:25.569761  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:25.569827  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:25.604352  662586 cri.go:89] found id: ""
	I1209 11:53:25.604389  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.604404  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:25.604413  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:25.604477  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:25.635832  662586 cri.go:89] found id: ""
	I1209 11:53:25.635865  662586 logs.go:282] 0 containers: []
	W1209 11:53:25.635878  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:25.635892  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:25.635908  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:25.650611  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:25.650647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:25.721092  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:25.721121  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:25.721139  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:25.795552  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:25.795598  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:25.858088  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:25.858161  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:25.898217  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.395882  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:26.950691  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.951203  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.091306  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:30.091842  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:28.410683  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:28.422993  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:28.423072  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:28.455054  662586 cri.go:89] found id: ""
	I1209 11:53:28.455083  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.455092  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:28.455098  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:28.455162  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:28.493000  662586 cri.go:89] found id: ""
	I1209 11:53:28.493037  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.493046  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:28.493052  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:28.493104  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:28.526294  662586 cri.go:89] found id: ""
	I1209 11:53:28.526333  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.526346  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:28.526354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:28.526417  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:28.560383  662586 cri.go:89] found id: ""
	I1209 11:53:28.560414  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.560423  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:28.560430  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:28.560485  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:28.595906  662586 cri.go:89] found id: ""
	I1209 11:53:28.595935  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.595946  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:28.595954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:28.596021  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:28.629548  662586 cri.go:89] found id: ""
	I1209 11:53:28.629584  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.629597  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:28.629607  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:28.629673  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:28.666362  662586 cri.go:89] found id: ""
	I1209 11:53:28.666398  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.666410  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:28.666418  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:28.666494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:28.697704  662586 cri.go:89] found id: ""
	I1209 11:53:28.697736  662586 logs.go:282] 0 containers: []
	W1209 11:53:28.697746  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:28.697756  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:28.697769  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:28.745774  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:28.745816  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:28.759543  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:28.759582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:28.834772  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:28.834795  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:28.834812  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:28.913137  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:28.913178  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:31.460658  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:31.473503  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:31.473575  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:31.506710  662586 cri.go:89] found id: ""
	I1209 11:53:31.506748  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.506760  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:31.506770  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:31.506842  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:31.544127  662586 cri.go:89] found id: ""
	I1209 11:53:31.544188  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.544202  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:31.544211  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:31.544289  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:31.591081  662586 cri.go:89] found id: ""
	I1209 11:53:31.591116  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.591128  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:31.591135  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:31.591213  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:31.629311  662586 cri.go:89] found id: ""
	I1209 11:53:31.629340  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.629348  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:31.629355  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:31.629432  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:31.671035  662586 cri.go:89] found id: ""
	I1209 11:53:31.671069  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.671081  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:31.671090  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:31.671162  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:31.705753  662586 cri.go:89] found id: ""
	I1209 11:53:31.705792  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.705805  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:31.705815  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:31.705889  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:31.739118  662586 cri.go:89] found id: ""
	I1209 11:53:31.739146  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.739155  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:31.739162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:31.739225  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:31.771085  662586 cri.go:89] found id: ""
	I1209 11:53:31.771120  662586 logs.go:282] 0 containers: []
	W1209 11:53:31.771129  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:31.771139  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:31.771152  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:31.820993  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:31.821049  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:31.835576  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:31.835612  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:31.903011  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:31.903039  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:31.903056  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:31.977784  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:31.977830  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:30.896197  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:33.395937  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:31.450832  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:33.451161  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:35.451446  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:32.590724  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:34.592352  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:34.514654  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:34.529156  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:34.529236  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:34.567552  662586 cri.go:89] found id: ""
	I1209 11:53:34.567580  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.567590  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:34.567598  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:34.567665  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:34.608863  662586 cri.go:89] found id: ""
	I1209 11:53:34.608891  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.608900  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:34.608907  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:34.608970  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:34.647204  662586 cri.go:89] found id: ""
	I1209 11:53:34.647242  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.647254  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:34.647263  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:34.647333  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:34.682511  662586 cri.go:89] found id: ""
	I1209 11:53:34.682565  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.682580  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:34.682596  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:34.682674  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:34.717557  662586 cri.go:89] found id: ""
	I1209 11:53:34.717585  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.717595  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:34.717602  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:34.717670  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:34.749814  662586 cri.go:89] found id: ""
	I1209 11:53:34.749851  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.749865  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:34.749876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:34.749949  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:34.782732  662586 cri.go:89] found id: ""
	I1209 11:53:34.782766  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.782776  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:34.782782  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:34.782846  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:34.817114  662586 cri.go:89] found id: ""
	I1209 11:53:34.817149  662586 logs.go:282] 0 containers: []
	W1209 11:53:34.817162  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:34.817175  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:34.817192  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:34.885963  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:34.885986  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:34.886001  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:34.969858  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:34.969905  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:35.006981  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:35.007024  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:35.055360  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:35.055401  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:37.570641  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:37.595904  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:37.595986  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:37.642205  662586 cri.go:89] found id: ""
	I1209 11:53:37.642248  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.642261  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:37.642270  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:37.642347  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:37.676666  662586 cri.go:89] found id: ""
	I1209 11:53:37.676692  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.676701  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:37.676707  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:37.676760  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:35.396037  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.896489  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.952569  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:40.450464  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.092250  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:39.092392  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:37.714201  662586 cri.go:89] found id: ""
	I1209 11:53:37.714233  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.714243  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:37.714249  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:37.714311  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:37.748018  662586 cri.go:89] found id: ""
	I1209 11:53:37.748047  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.748058  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:37.748067  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:37.748127  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:37.783763  662586 cri.go:89] found id: ""
	I1209 11:53:37.783799  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.783807  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:37.783823  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:37.783898  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:37.822470  662586 cri.go:89] found id: ""
	I1209 11:53:37.822502  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.822514  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:37.822523  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:37.822585  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:37.858493  662586 cri.go:89] found id: ""
	I1209 11:53:37.858527  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.858537  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:37.858543  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:37.858599  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:37.899263  662586 cri.go:89] found id: ""
	I1209 11:53:37.899288  662586 logs.go:282] 0 containers: []
	W1209 11:53:37.899295  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:37.899304  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:37.899321  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:37.972531  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:37.972559  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:37.972575  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:38.046271  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:38.046315  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:38.088829  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:38.088867  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:38.141935  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:38.141985  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:40.657131  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:40.669884  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:40.669954  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:40.704291  662586 cri.go:89] found id: ""
	I1209 11:53:40.704332  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.704345  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:40.704357  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:40.704435  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:40.738637  662586 cri.go:89] found id: ""
	I1209 11:53:40.738673  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.738684  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:40.738690  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:40.738747  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:40.770737  662586 cri.go:89] found id: ""
	I1209 11:53:40.770774  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.770787  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:40.770796  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:40.770865  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:40.805667  662586 cri.go:89] found id: ""
	I1209 11:53:40.805702  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.805729  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:40.805739  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:40.805812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:40.838444  662586 cri.go:89] found id: ""
	I1209 11:53:40.838482  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.838496  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:40.838505  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:40.838578  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:40.871644  662586 cri.go:89] found id: ""
	I1209 11:53:40.871679  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.871691  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:40.871700  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:40.871763  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:40.907242  662586 cri.go:89] found id: ""
	I1209 11:53:40.907275  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.907284  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:40.907291  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:40.907359  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:40.941542  662586 cri.go:89] found id: ""
	I1209 11:53:40.941570  662586 logs.go:282] 0 containers: []
	W1209 11:53:40.941583  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:40.941595  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:40.941616  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:41.022344  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:41.022373  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:41.022387  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:41.097083  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:41.097129  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:41.135303  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:41.135349  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:41.191400  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:41.191447  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:40.396681  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:42.895118  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:42.451217  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:44.951893  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:41.591753  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:44.090762  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:46.091821  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:43.705246  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:43.717939  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:43.718001  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:43.750027  662586 cri.go:89] found id: ""
	I1209 11:53:43.750066  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.750079  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:43.750087  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:43.750156  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:43.782028  662586 cri.go:89] found id: ""
	I1209 11:53:43.782067  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.782081  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:43.782090  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:43.782153  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:43.815509  662586 cri.go:89] found id: ""
	I1209 11:53:43.815549  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.815562  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:43.815570  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:43.815629  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:43.852803  662586 cri.go:89] found id: ""
	I1209 11:53:43.852834  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.852842  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:43.852850  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:43.852915  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:43.886761  662586 cri.go:89] found id: ""
	I1209 11:53:43.886789  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.886798  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:43.886805  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:43.886883  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:43.924427  662586 cri.go:89] found id: ""
	I1209 11:53:43.924458  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.924466  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:43.924478  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:43.924542  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:43.960351  662586 cri.go:89] found id: ""
	I1209 11:53:43.960381  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.960398  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:43.960407  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:43.960476  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:43.993933  662586 cri.go:89] found id: ""
	I1209 11:53:43.993960  662586 logs.go:282] 0 containers: []
	W1209 11:53:43.993969  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:43.993979  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:43.994002  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:44.006915  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:44.006952  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:44.078928  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:44.078981  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:44.078999  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:44.158129  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:44.158188  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:44.199543  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:44.199577  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:46.748607  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:46.762381  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:46.762494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:46.795674  662586 cri.go:89] found id: ""
	I1209 11:53:46.795713  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.795727  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:46.795737  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:46.795812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:46.834027  662586 cri.go:89] found id: ""
	I1209 11:53:46.834055  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.834065  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:46.834072  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:46.834124  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:46.872116  662586 cri.go:89] found id: ""
	I1209 11:53:46.872156  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.872169  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:46.872179  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:46.872264  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:46.906571  662586 cri.go:89] found id: ""
	I1209 11:53:46.906599  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.906608  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:46.906615  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:46.906676  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:46.938266  662586 cri.go:89] found id: ""
	I1209 11:53:46.938303  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.938315  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:46.938323  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:46.938381  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:46.972281  662586 cri.go:89] found id: ""
	I1209 11:53:46.972318  662586 logs.go:282] 0 containers: []
	W1209 11:53:46.972329  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:46.972337  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:46.972391  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:47.004797  662586 cri.go:89] found id: ""
	I1209 11:53:47.004828  662586 logs.go:282] 0 containers: []
	W1209 11:53:47.004837  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:47.004843  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:47.004908  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:47.035877  662586 cri.go:89] found id: ""
	I1209 11:53:47.035905  662586 logs.go:282] 0 containers: []
	W1209 11:53:47.035917  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:47.035931  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:47.035947  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:47.087654  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:47.087706  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:47.102311  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:47.102346  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:47.195370  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:47.195396  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:47.195414  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:47.279103  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:47.279158  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:44.895382  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:46.895838  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:48.896133  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:47.453879  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:49.951686  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:48.591393  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:51.090806  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:49.817942  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:49.830291  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:49.830357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:49.862917  662586 cri.go:89] found id: ""
	I1209 11:53:49.862950  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.862959  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:49.862965  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:49.863033  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:49.894971  662586 cri.go:89] found id: ""
	I1209 11:53:49.895005  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.895018  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:49.895027  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:49.895097  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:49.931737  662586 cri.go:89] found id: ""
	I1209 11:53:49.931775  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.931786  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:49.931800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:49.931862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:49.971064  662586 cri.go:89] found id: ""
	I1209 11:53:49.971097  662586 logs.go:282] 0 containers: []
	W1209 11:53:49.971109  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:49.971118  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:49.971210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:50.005354  662586 cri.go:89] found id: ""
	I1209 11:53:50.005393  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.005417  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:50.005427  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:50.005501  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:50.044209  662586 cri.go:89] found id: ""
	I1209 11:53:50.044240  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.044249  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:50.044257  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:50.044313  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:50.076360  662586 cri.go:89] found id: ""
	I1209 11:53:50.076408  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.076418  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:50.076426  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:50.076494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:50.112125  662586 cri.go:89] found id: ""
	I1209 11:53:50.112168  662586 logs.go:282] 0 containers: []
	W1209 11:53:50.112196  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:50.112210  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:50.112228  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:50.164486  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:50.164530  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:50.178489  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:50.178525  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:50.250131  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:50.250165  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:50.250196  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:50.329733  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:50.329779  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:50.896354  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:53.395149  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:52.450595  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:54.450939  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:53.092311  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:55.590766  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:52.874887  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:52.888518  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:52.888607  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:52.924361  662586 cri.go:89] found id: ""
	I1209 11:53:52.924389  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.924398  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:52.924404  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:52.924467  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:52.957769  662586 cri.go:89] found id: ""
	I1209 11:53:52.957803  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.957816  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:52.957824  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:52.957891  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:52.990339  662586 cri.go:89] found id: ""
	I1209 11:53:52.990376  662586 logs.go:282] 0 containers: []
	W1209 11:53:52.990388  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:52.990397  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:52.990461  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:53.022959  662586 cri.go:89] found id: ""
	I1209 11:53:53.023003  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.023017  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:53.023028  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:53.023111  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:53.060271  662586 cri.go:89] found id: ""
	I1209 11:53:53.060299  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.060315  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:53.060321  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:53.060390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:53.093470  662586 cri.go:89] found id: ""
	I1209 11:53:53.093500  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.093511  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:53.093519  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:53.093575  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:53.128902  662586 cri.go:89] found id: ""
	I1209 11:53:53.128941  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.128955  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:53.128963  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:53.129036  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:53.161927  662586 cri.go:89] found id: ""
	I1209 11:53:53.161955  662586 logs.go:282] 0 containers: []
	W1209 11:53:53.161964  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:53.161974  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:53.161988  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:53.214098  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:53.214140  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:53.229191  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:53.229232  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:53.308648  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:53.308678  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:53.308695  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:53.386776  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:53.386816  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:55.929307  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:55.942217  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:55.942285  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:55.983522  662586 cri.go:89] found id: ""
	I1209 11:53:55.983563  662586 logs.go:282] 0 containers: []
	W1209 11:53:55.983572  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:55.983579  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:55.983645  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:56.017262  662586 cri.go:89] found id: ""
	I1209 11:53:56.017293  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.017308  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:56.017314  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:56.017367  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:56.052385  662586 cri.go:89] found id: ""
	I1209 11:53:56.052419  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.052429  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:56.052436  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:56.052489  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:56.085385  662586 cri.go:89] found id: ""
	I1209 11:53:56.085432  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.085444  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:56.085452  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:56.085519  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:56.122754  662586 cri.go:89] found id: ""
	I1209 11:53:56.122785  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.122794  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:56.122800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:56.122862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:56.159033  662586 cri.go:89] found id: ""
	I1209 11:53:56.159061  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.159070  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:56.159077  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:56.159128  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:56.198022  662586 cri.go:89] found id: ""
	I1209 11:53:56.198058  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.198070  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:56.198078  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:56.198148  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:56.231475  662586 cri.go:89] found id: ""
	I1209 11:53:56.231515  662586 logs.go:282] 0 containers: []
	W1209 11:53:56.231528  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:56.231542  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:56.231559  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:56.304922  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:56.304969  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:56.339875  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:56.339916  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:56.392893  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:56.392929  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:56.406334  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:56.406376  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:56.474037  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:55.895077  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:57.895835  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:56.452163  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:58.950981  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:57.590943  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:00.091057  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:53:58.974725  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:53:58.987817  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:53:58.987890  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:53:59.020951  662586 cri.go:89] found id: ""
	I1209 11:53:59.020987  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.020996  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:53:59.021003  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:53:59.021055  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:53:59.055675  662586 cri.go:89] found id: ""
	I1209 11:53:59.055715  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.055727  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:53:59.055733  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:53:59.055800  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:53:59.090099  662586 cri.go:89] found id: ""
	I1209 11:53:59.090138  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.090150  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:53:59.090158  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:53:59.090252  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:53:59.124680  662586 cri.go:89] found id: ""
	I1209 11:53:59.124718  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.124730  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:53:59.124739  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:53:59.124802  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:53:59.157772  662586 cri.go:89] found id: ""
	I1209 11:53:59.157808  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.157819  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:53:59.157828  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:53:59.157892  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:53:59.191098  662586 cri.go:89] found id: ""
	I1209 11:53:59.191132  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.191141  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:53:59.191148  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:53:59.191212  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:53:59.224050  662586 cri.go:89] found id: ""
	I1209 11:53:59.224090  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.224102  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:53:59.224110  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:53:59.224198  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:53:59.262361  662586 cri.go:89] found id: ""
	I1209 11:53:59.262397  662586 logs.go:282] 0 containers: []
	W1209 11:53:59.262418  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:53:59.262432  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:53:59.262456  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:53:59.276811  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:53:59.276844  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:53:59.349465  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:53:59.349492  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:53:59.349506  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:53:59.429146  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:53:59.429192  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:53:59.470246  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:53:59.470287  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:02.021651  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:02.036039  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:02.036109  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:02.070999  662586 cri.go:89] found id: ""
	I1209 11:54:02.071034  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.071045  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:02.071052  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:02.071119  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:02.107506  662586 cri.go:89] found id: ""
	I1209 11:54:02.107536  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.107546  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:02.107554  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:02.107624  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:02.146279  662586 cri.go:89] found id: ""
	I1209 11:54:02.146314  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.146326  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:02.146342  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:02.146408  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:02.178349  662586 cri.go:89] found id: ""
	I1209 11:54:02.178378  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.178387  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:02.178402  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:02.178460  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:02.211916  662586 cri.go:89] found id: ""
	I1209 11:54:02.211952  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.211963  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:02.211969  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:02.212038  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:02.246334  662586 cri.go:89] found id: ""
	I1209 11:54:02.246370  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.246380  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:02.246387  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:02.246452  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:02.280111  662586 cri.go:89] found id: ""
	I1209 11:54:02.280157  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.280168  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:02.280175  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:02.280246  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:02.314141  662586 cri.go:89] found id: ""
	I1209 11:54:02.314188  662586 logs.go:282] 0 containers: []
	W1209 11:54:02.314203  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:02.314216  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:02.314236  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:02.327220  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:02.327253  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:02.396099  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:02.396127  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:02.396142  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:02.478096  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:02.478148  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:02.515144  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:02.515175  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:53:59.896109  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:02.396485  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:04.396925  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:01.450279  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:03.450732  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:05.451265  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:02.092010  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:04.092427  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:05.069286  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:05.082453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:05.082540  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:05.116263  662586 cri.go:89] found id: ""
	I1209 11:54:05.116299  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.116313  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:05.116321  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:05.116388  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:05.150736  662586 cri.go:89] found id: ""
	I1209 11:54:05.150775  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.150788  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:05.150796  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:05.150864  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:05.183757  662586 cri.go:89] found id: ""
	I1209 11:54:05.183792  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.183804  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:05.183812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:05.183873  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:05.215986  662586 cri.go:89] found id: ""
	I1209 11:54:05.216017  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.216029  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:05.216038  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:05.216096  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:05.247648  662586 cri.go:89] found id: ""
	I1209 11:54:05.247686  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.247698  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:05.247707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:05.247776  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:05.279455  662586 cri.go:89] found id: ""
	I1209 11:54:05.279484  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.279495  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:05.279504  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:05.279567  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:05.320334  662586 cri.go:89] found id: ""
	I1209 11:54:05.320374  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.320387  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:05.320398  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:05.320490  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:05.353475  662586 cri.go:89] found id: ""
	I1209 11:54:05.353503  662586 logs.go:282] 0 containers: []
	W1209 11:54:05.353512  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:05.353522  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:05.353536  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:05.368320  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:05.368357  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:05.442152  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:05.442193  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:05.442212  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:05.523726  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:05.523768  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:05.562405  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:05.562438  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:06.895764  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.897032  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:07.454237  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:09.456440  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:06.591474  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.591578  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:11.091599  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:08.115564  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:08.129426  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:08.129523  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:08.162412  662586 cri.go:89] found id: ""
	I1209 11:54:08.162454  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.162467  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:08.162477  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:08.162543  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:08.196821  662586 cri.go:89] found id: ""
	I1209 11:54:08.196860  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.196873  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:08.196882  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:08.196949  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:08.233068  662586 cri.go:89] found id: ""
	I1209 11:54:08.233106  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.233117  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:08.233124  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:08.233184  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:08.268683  662586 cri.go:89] found id: ""
	I1209 11:54:08.268715  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.268724  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:08.268731  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:08.268790  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:08.303237  662586 cri.go:89] found id: ""
	I1209 11:54:08.303276  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.303288  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:08.303309  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:08.303393  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:08.339513  662586 cri.go:89] found id: ""
	I1209 11:54:08.339543  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.339551  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:08.339557  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:08.339612  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:08.376237  662586 cri.go:89] found id: ""
	I1209 11:54:08.376268  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.376289  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:08.376298  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:08.376363  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:08.410530  662586 cri.go:89] found id: ""
	I1209 11:54:08.410560  662586 logs.go:282] 0 containers: []
	W1209 11:54:08.410568  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:08.410577  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:08.410589  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:08.460064  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:08.460101  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:08.474547  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:08.474582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:08.544231  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:08.544260  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:08.544277  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:08.624727  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:08.624775  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:11.167943  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:11.183210  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:11.183294  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:11.221326  662586 cri.go:89] found id: ""
	I1209 11:54:11.221356  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.221365  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:11.221371  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:11.221434  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:11.254688  662586 cri.go:89] found id: ""
	I1209 11:54:11.254721  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.254730  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:11.254736  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:11.254801  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:11.287611  662586 cri.go:89] found id: ""
	I1209 11:54:11.287649  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.287660  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:11.287666  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:11.287738  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:11.320533  662586 cri.go:89] found id: ""
	I1209 11:54:11.320565  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.320574  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:11.320580  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:11.320638  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:11.362890  662586 cri.go:89] found id: ""
	I1209 11:54:11.362923  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.362933  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:11.362939  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:11.363007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:11.418729  662586 cri.go:89] found id: ""
	I1209 11:54:11.418762  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.418772  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:11.418779  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:11.418837  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:11.455336  662586 cri.go:89] found id: ""
	I1209 11:54:11.455374  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.455388  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:11.455397  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:11.455479  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:11.491307  662586 cri.go:89] found id: ""
	I1209 11:54:11.491334  662586 logs.go:282] 0 containers: []
	W1209 11:54:11.491344  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:11.491355  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:11.491369  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:11.543161  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:11.543204  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:11.556633  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:11.556670  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:11.626971  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:11.627001  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:11.627025  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:11.702061  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:11.702107  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:11.396167  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:13.897097  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:11.952029  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:14.451701  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:13.590749  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:15.591845  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:14.245241  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:14.258461  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:14.258544  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:14.292108  662586 cri.go:89] found id: ""
	I1209 11:54:14.292147  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.292156  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:14.292163  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:14.292219  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:14.327347  662586 cri.go:89] found id: ""
	I1209 11:54:14.327381  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.327394  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:14.327403  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:14.327484  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:14.361188  662586 cri.go:89] found id: ""
	I1209 11:54:14.361220  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.361229  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:14.361236  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:14.361290  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:14.394898  662586 cri.go:89] found id: ""
	I1209 11:54:14.394936  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.394948  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:14.394960  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:14.395027  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:14.429326  662586 cri.go:89] found id: ""
	I1209 11:54:14.429402  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.429420  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:14.429431  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:14.429510  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:14.462903  662586 cri.go:89] found id: ""
	I1209 11:54:14.462938  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.462947  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:14.462954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:14.463009  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:14.496362  662586 cri.go:89] found id: ""
	I1209 11:54:14.496396  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.496409  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:14.496418  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:14.496562  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:14.530052  662586 cri.go:89] found id: ""
	I1209 11:54:14.530085  662586 logs.go:282] 0 containers: []
	W1209 11:54:14.530098  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:14.530111  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:14.530131  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:14.543096  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:14.543133  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:14.611030  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:14.611055  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:14.611067  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:14.684984  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:14.685041  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:14.722842  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:14.722881  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:17.275868  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:17.288812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:17.288898  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:17.323732  662586 cri.go:89] found id: ""
	I1209 11:54:17.323766  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.323777  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:17.323786  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:17.323852  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:17.367753  662586 cri.go:89] found id: ""
	I1209 11:54:17.367788  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.367801  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:17.367810  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:17.367878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:17.411444  662586 cri.go:89] found id: ""
	I1209 11:54:17.411476  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.411488  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:17.411496  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:17.411563  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:17.450790  662586 cri.go:89] found id: ""
	I1209 11:54:17.450821  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.450832  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:17.450840  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:17.450913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:17.488824  662586 cri.go:89] found id: ""
	I1209 11:54:17.488859  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.488869  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:17.488876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:17.488948  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:17.522051  662586 cri.go:89] found id: ""
	I1209 11:54:17.522085  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.522094  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:17.522102  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:17.522165  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:17.556653  662586 cri.go:89] found id: ""
	I1209 11:54:17.556687  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.556700  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:17.556707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:17.556783  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:17.591303  662586 cri.go:89] found id: ""
	I1209 11:54:17.591337  662586 logs.go:282] 0 containers: []
	W1209 11:54:17.591355  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:17.591367  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:17.591384  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:17.656675  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:17.656699  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:17.656712  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:16.396574  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:18.896050  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:16.950508  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:19.451294  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:18.091307  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:20.091489  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:17.739894  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:17.739939  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:17.789486  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:17.789517  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:17.843606  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:17.843648  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:20.361896  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:20.378015  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:20.378105  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:20.412252  662586 cri.go:89] found id: ""
	I1209 11:54:20.412299  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.412311  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:20.412327  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:20.412396  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:20.443638  662586 cri.go:89] found id: ""
	I1209 11:54:20.443671  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.443682  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:20.443690  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:20.443758  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:20.478578  662586 cri.go:89] found id: ""
	I1209 11:54:20.478613  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.478625  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:20.478634  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:20.478704  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:20.512232  662586 cri.go:89] found id: ""
	I1209 11:54:20.512266  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.512279  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:20.512295  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:20.512357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:20.544358  662586 cri.go:89] found id: ""
	I1209 11:54:20.544398  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.544413  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:20.544429  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:20.544494  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:20.579476  662586 cri.go:89] found id: ""
	I1209 11:54:20.579513  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.579525  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:20.579533  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:20.579600  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:20.613851  662586 cri.go:89] found id: ""
	I1209 11:54:20.613884  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.613897  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:20.613903  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:20.613973  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:20.647311  662586 cri.go:89] found id: ""
	I1209 11:54:20.647342  662586 logs.go:282] 0 containers: []
	W1209 11:54:20.647351  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:20.647362  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:20.647375  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:20.695798  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:20.695839  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:20.709443  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:20.709478  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:20.779211  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:20.779237  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:20.779253  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:20.857966  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:20.858012  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:20.896168  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:22.896667  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:21.455716  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:23.950823  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:25.952038  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:22.592225  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:25.091934  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:23.398095  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:23.412622  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:23.412686  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:23.446582  662586 cri.go:89] found id: ""
	I1209 11:54:23.446616  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.446628  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:23.446637  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:23.446705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:23.487896  662586 cri.go:89] found id: ""
	I1209 11:54:23.487926  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.487935  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:23.487941  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:23.488007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:23.521520  662586 cri.go:89] found id: ""
	I1209 11:54:23.521559  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.521571  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:23.521579  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:23.521651  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:23.561296  662586 cri.go:89] found id: ""
	I1209 11:54:23.561329  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.561342  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:23.561350  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:23.561417  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:23.604936  662586 cri.go:89] found id: ""
	I1209 11:54:23.604965  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.604976  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:23.604985  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:23.605055  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:23.665193  662586 cri.go:89] found id: ""
	I1209 11:54:23.665225  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.665237  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:23.665247  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:23.665315  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:23.700202  662586 cri.go:89] found id: ""
	I1209 11:54:23.700239  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.700251  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:23.700259  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:23.700336  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:23.734877  662586 cri.go:89] found id: ""
	I1209 11:54:23.734907  662586 logs.go:282] 0 containers: []
	W1209 11:54:23.734917  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:23.734927  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:23.734941  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:23.817328  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:23.817371  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:23.855052  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:23.855085  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:23.909107  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:23.909154  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:23.924198  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:23.924227  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:23.991976  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:26.492366  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:26.506223  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:26.506299  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:26.544932  662586 cri.go:89] found id: ""
	I1209 11:54:26.544974  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.544987  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:26.544997  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:26.545080  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:26.579581  662586 cri.go:89] found id: ""
	I1209 11:54:26.579621  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.579634  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:26.579643  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:26.579716  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:26.612510  662586 cri.go:89] found id: ""
	I1209 11:54:26.612545  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.612567  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:26.612577  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:26.612646  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:26.646273  662586 cri.go:89] found id: ""
	I1209 11:54:26.646306  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.646316  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:26.646322  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:26.646376  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:26.682027  662586 cri.go:89] found id: ""
	I1209 11:54:26.682063  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.682072  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:26.682078  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:26.682132  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:26.715822  662586 cri.go:89] found id: ""
	I1209 11:54:26.715876  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.715889  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:26.715898  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:26.715964  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:26.755976  662586 cri.go:89] found id: ""
	I1209 11:54:26.756016  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.756031  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:26.756040  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:26.756122  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:26.787258  662586 cri.go:89] found id: ""
	I1209 11:54:26.787297  662586 logs.go:282] 0 containers: []
	W1209 11:54:26.787308  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:26.787319  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:26.787333  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:26.800534  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:26.800573  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:26.865767  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:26.865798  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:26.865824  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:26.950409  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:26.950460  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:26.994281  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:26.994320  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:25.396411  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:27.894846  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:28.451141  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:30.455101  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:27.591769  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:30.091528  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:29.544568  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:29.565182  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:29.565263  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:29.625116  662586 cri.go:89] found id: ""
	I1209 11:54:29.625155  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.625168  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:29.625181  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:29.625257  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:29.673689  662586 cri.go:89] found id: ""
	I1209 11:54:29.673727  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.673739  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:29.673747  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:29.673811  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:29.705925  662586 cri.go:89] found id: ""
	I1209 11:54:29.705959  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.705971  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:29.705979  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:29.706033  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:29.738731  662586 cri.go:89] found id: ""
	I1209 11:54:29.738759  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.738767  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:29.738774  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:29.738832  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:29.770778  662586 cri.go:89] found id: ""
	I1209 11:54:29.770814  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.770826  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:29.770833  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:29.770899  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:29.801925  662586 cri.go:89] found id: ""
	I1209 11:54:29.801961  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.801973  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:29.801981  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:29.802050  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:29.833681  662586 cri.go:89] found id: ""
	I1209 11:54:29.833712  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.833722  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:29.833727  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:29.833791  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:29.873666  662586 cri.go:89] found id: ""
	I1209 11:54:29.873700  662586 logs.go:282] 0 containers: []
	W1209 11:54:29.873712  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:29.873722  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:29.873735  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:29.914855  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:29.914895  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:29.967730  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:29.967772  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:29.982037  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:29.982070  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:30.047168  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:30.047195  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:30.047212  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:32.623371  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:32.636346  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:32.636411  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:32.677709  662586 cri.go:89] found id: ""
	I1209 11:54:32.677736  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.677744  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:32.677753  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:32.677805  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:29.896176  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.395216  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.952287  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:35.451456  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.092615  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:34.591397  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:32.710906  662586 cri.go:89] found id: ""
	I1209 11:54:32.710933  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.710942  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:32.710948  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:32.711000  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:32.744623  662586 cri.go:89] found id: ""
	I1209 11:54:32.744654  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.744667  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:32.744676  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:32.744736  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:32.779334  662586 cri.go:89] found id: ""
	I1209 11:54:32.779364  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.779375  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:32.779382  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:32.779443  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:32.814998  662586 cri.go:89] found id: ""
	I1209 11:54:32.815032  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.815046  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:32.815055  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:32.815128  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:32.850054  662586 cri.go:89] found id: ""
	I1209 11:54:32.850099  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.850116  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:32.850127  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:32.850213  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:32.885769  662586 cri.go:89] found id: ""
	I1209 11:54:32.885805  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.885818  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:32.885827  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:32.885899  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:32.927973  662586 cri.go:89] found id: ""
	I1209 11:54:32.928001  662586 logs.go:282] 0 containers: []
	W1209 11:54:32.928010  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:32.928019  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:32.928032  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:32.981915  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:32.981966  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:32.995817  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:32.995851  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:33.062409  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:33.062445  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:33.062462  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:33.146967  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:33.147011  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:35.688225  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:35.701226  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:35.701325  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:35.738628  662586 cri.go:89] found id: ""
	I1209 11:54:35.738655  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.738663  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:35.738670  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:35.738737  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:35.771125  662586 cri.go:89] found id: ""
	I1209 11:54:35.771163  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.771177  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:35.771187  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:35.771260  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:35.806244  662586 cri.go:89] found id: ""
	I1209 11:54:35.806277  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.806290  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:35.806301  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:35.806376  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:35.839871  662586 cri.go:89] found id: ""
	I1209 11:54:35.839912  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.839925  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:35.839932  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:35.840010  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:35.874994  662586 cri.go:89] found id: ""
	I1209 11:54:35.875034  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.875046  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:35.875054  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:35.875129  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:35.910802  662586 cri.go:89] found id: ""
	I1209 11:54:35.910834  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.910846  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:35.910855  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:35.910927  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:35.944633  662586 cri.go:89] found id: ""
	I1209 11:54:35.944663  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.944672  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:35.944678  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:35.944749  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:35.982732  662586 cri.go:89] found id: ""
	I1209 11:54:35.982781  662586 logs.go:282] 0 containers: []
	W1209 11:54:35.982796  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:35.982811  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:35.982830  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:35.996271  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:35.996302  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:36.063463  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:36.063533  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:36.063554  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:36.141789  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:36.141833  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:36.187015  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:36.187047  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:34.895890  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.396472  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.951404  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:40.452814  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:37.091548  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:39.092168  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:38.739585  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:38.754322  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:38.754394  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:38.792497  662586 cri.go:89] found id: ""
	I1209 11:54:38.792525  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.792535  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:38.792543  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:38.792608  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:38.829730  662586 cri.go:89] found id: ""
	I1209 11:54:38.829759  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.829768  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:38.829774  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:38.829834  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:38.869942  662586 cri.go:89] found id: ""
	I1209 11:54:38.869981  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.869994  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:38.870015  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:38.870085  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:38.906001  662586 cri.go:89] found id: ""
	I1209 11:54:38.906041  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.906054  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:38.906063  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:38.906133  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:38.944389  662586 cri.go:89] found id: ""
	I1209 11:54:38.944427  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.944445  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:38.944453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:38.944534  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:38.979633  662586 cri.go:89] found id: ""
	I1209 11:54:38.979665  662586 logs.go:282] 0 containers: []
	W1209 11:54:38.979674  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:38.979681  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:38.979735  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:39.016366  662586 cri.go:89] found id: ""
	I1209 11:54:39.016402  662586 logs.go:282] 0 containers: []
	W1209 11:54:39.016416  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:39.016424  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:39.016489  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:39.049084  662586 cri.go:89] found id: ""
	I1209 11:54:39.049116  662586 logs.go:282] 0 containers: []
	W1209 11:54:39.049125  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:39.049134  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:39.049148  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:39.113953  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:39.113985  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:39.114004  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:39.191715  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:39.191767  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:39.232127  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:39.232167  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:39.281406  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:39.281448  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:41.795395  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:41.810293  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:41.810364  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:41.849819  662586 cri.go:89] found id: ""
	I1209 11:54:41.849858  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.849872  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:41.849882  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:41.849952  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:41.883871  662586 cri.go:89] found id: ""
	I1209 11:54:41.883908  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.883934  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:41.883942  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:41.884017  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:41.918194  662586 cri.go:89] found id: ""
	I1209 11:54:41.918230  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.918239  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:41.918245  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:41.918312  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:41.950878  662586 cri.go:89] found id: ""
	I1209 11:54:41.950912  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.950924  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:41.950933  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:41.950995  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:41.982922  662586 cri.go:89] found id: ""
	I1209 11:54:41.982964  662586 logs.go:282] 0 containers: []
	W1209 11:54:41.982976  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:41.982985  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:41.983064  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:42.014066  662586 cri.go:89] found id: ""
	I1209 11:54:42.014107  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.014120  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:42.014129  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:42.014229  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:42.048017  662586 cri.go:89] found id: ""
	I1209 11:54:42.048056  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.048070  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:42.048079  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:42.048146  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:42.080585  662586 cri.go:89] found id: ""
	I1209 11:54:42.080614  662586 logs.go:282] 0 containers: []
	W1209 11:54:42.080624  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:42.080634  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:42.080646  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:42.135012  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:42.135054  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:42.148424  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:42.148462  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:42.219179  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:42.219206  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:42.219230  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:42.305855  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:42.305902  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:39.895830  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:41.896255  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:44.398373  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:42.949835  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:44.951542  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:41.590831  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:43.592053  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:45.593044  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:44.843158  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:44.856317  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:44.856380  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:44.890940  662586 cri.go:89] found id: ""
	I1209 11:54:44.890984  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.891003  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:44.891012  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:44.891081  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:44.923657  662586 cri.go:89] found id: ""
	I1209 11:54:44.923684  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.923692  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:44.923698  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:44.923769  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:44.957512  662586 cri.go:89] found id: ""
	I1209 11:54:44.957545  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.957558  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:44.957566  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:44.957636  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:44.998084  662586 cri.go:89] found id: ""
	I1209 11:54:44.998112  662586 logs.go:282] 0 containers: []
	W1209 11:54:44.998121  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:44.998128  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:44.998210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:45.030335  662586 cri.go:89] found id: ""
	I1209 11:54:45.030360  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.030369  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:45.030375  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:45.030447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:45.063098  662586 cri.go:89] found id: ""
	I1209 11:54:45.063127  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.063135  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:45.063141  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:45.063210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:45.098430  662586 cri.go:89] found id: ""
	I1209 11:54:45.098458  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.098466  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:45.098472  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:45.098526  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:45.132064  662586 cri.go:89] found id: ""
	I1209 11:54:45.132094  662586 logs.go:282] 0 containers: []
	W1209 11:54:45.132102  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:45.132113  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:45.132131  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:45.185512  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:45.185556  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:45.199543  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:45.199572  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:45.268777  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:45.268803  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:45.268817  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:45.352250  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:45.352299  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:46.897153  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:49.395935  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:46.952862  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:49.450006  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:48.092394  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:50.591937  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:47.892201  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:47.906961  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:47.907053  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:47.941349  662586 cri.go:89] found id: ""
	I1209 11:54:47.941394  662586 logs.go:282] 0 containers: []
	W1209 11:54:47.941408  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:47.941418  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:47.941479  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:47.981086  662586 cri.go:89] found id: ""
	I1209 11:54:47.981120  662586 logs.go:282] 0 containers: []
	W1209 11:54:47.981133  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:47.981141  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:47.981210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:48.014105  662586 cri.go:89] found id: ""
	I1209 11:54:48.014142  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.014151  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:48.014162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:48.014249  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:48.049506  662586 cri.go:89] found id: ""
	I1209 11:54:48.049535  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.049544  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:48.049552  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:48.049619  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:48.084284  662586 cri.go:89] found id: ""
	I1209 11:54:48.084314  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.084324  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:48.084336  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:48.084406  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:48.117318  662586 cri.go:89] found id: ""
	I1209 11:54:48.117349  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.117362  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:48.117371  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:48.117441  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:48.150121  662586 cri.go:89] found id: ""
	I1209 11:54:48.150151  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.150187  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:48.150198  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:48.150266  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:48.180919  662586 cri.go:89] found id: ""
	I1209 11:54:48.180947  662586 logs.go:282] 0 containers: []
	W1209 11:54:48.180955  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:48.180966  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:48.180978  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:48.249572  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:48.249602  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:48.249617  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:48.324508  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:48.324552  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:48.363856  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:48.363901  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:48.415662  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:48.415721  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:50.929811  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:50.943650  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:50.943714  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:50.976444  662586 cri.go:89] found id: ""
	I1209 11:54:50.976480  662586 logs.go:282] 0 containers: []
	W1209 11:54:50.976493  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:50.976502  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:50.976574  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:51.016567  662586 cri.go:89] found id: ""
	I1209 11:54:51.016600  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.016613  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:51.016621  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:51.016699  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:51.048933  662586 cri.go:89] found id: ""
	I1209 11:54:51.048967  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.048977  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:51.048986  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:51.049073  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:51.083292  662586 cri.go:89] found id: ""
	I1209 11:54:51.083333  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.083345  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:51.083354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:51.083423  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:51.118505  662586 cri.go:89] found id: ""
	I1209 11:54:51.118547  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.118560  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:51.118571  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:51.118644  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:51.152818  662586 cri.go:89] found id: ""
	I1209 11:54:51.152847  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.152856  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:51.152870  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:51.152922  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:51.186953  662586 cri.go:89] found id: ""
	I1209 11:54:51.186981  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.186991  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:51.186997  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:51.187063  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:51.219305  662586 cri.go:89] found id: ""
	I1209 11:54:51.219337  662586 logs.go:282] 0 containers: []
	W1209 11:54:51.219348  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:51.219361  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:51.219380  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:51.256295  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:51.256338  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:51.313751  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:51.313806  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:51.326940  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:51.326977  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:51.397395  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:51.397428  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:51.397445  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:51.396434  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.896554  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:51.456719  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.951566  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:52.592043  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:55.091800  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:53.975557  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:53.989509  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:53.989581  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:54.024363  662586 cri.go:89] found id: ""
	I1209 11:54:54.024403  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.024416  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:54.024423  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:54.024484  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:54.062618  662586 cri.go:89] found id: ""
	I1209 11:54:54.062649  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.062659  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:54.062667  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:54.062739  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:54.100194  662586 cri.go:89] found id: ""
	I1209 11:54:54.100231  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.100243  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:54.100252  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:54.100324  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:54.135302  662586 cri.go:89] found id: ""
	I1209 11:54:54.135341  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.135354  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:54.135363  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:54.135447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:54.170898  662586 cri.go:89] found id: ""
	I1209 11:54:54.170940  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.170953  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:54.170963  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:54.171035  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:54.205098  662586 cri.go:89] found id: ""
	I1209 11:54:54.205138  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.205151  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:54.205159  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:54.205223  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:54.239153  662586 cri.go:89] found id: ""
	I1209 11:54:54.239210  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.239226  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:54.239234  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:54.239307  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:54.278213  662586 cri.go:89] found id: ""
	I1209 11:54:54.278248  662586 logs.go:282] 0 containers: []
	W1209 11:54:54.278260  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:54.278275  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:54.278296  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:54.348095  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:54.348128  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:54.348156  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:54.427181  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:54.427224  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:54.467623  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:54.467656  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:54.519690  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:54.519734  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:57.033524  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:54:57.046420  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:54:57.046518  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:54:57.079588  662586 cri.go:89] found id: ""
	I1209 11:54:57.079616  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.079626  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:54:57.079633  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:54:57.079687  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:54:57.114944  662586 cri.go:89] found id: ""
	I1209 11:54:57.114973  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.114982  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:54:57.114988  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:54:57.115043  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:54:57.147667  662586 cri.go:89] found id: ""
	I1209 11:54:57.147708  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.147721  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:54:57.147730  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:54:57.147794  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:54:57.182339  662586 cri.go:89] found id: ""
	I1209 11:54:57.182370  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.182386  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:54:57.182395  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:54:57.182470  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:54:57.223129  662586 cri.go:89] found id: ""
	I1209 11:54:57.223170  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.223186  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:54:57.223197  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:54:57.223270  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:54:57.262351  662586 cri.go:89] found id: ""
	I1209 11:54:57.262386  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.262398  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:54:57.262409  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:54:57.262471  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:54:57.298743  662586 cri.go:89] found id: ""
	I1209 11:54:57.298772  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.298782  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:54:57.298789  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:54:57.298856  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:54:57.339030  662586 cri.go:89] found id: ""
	I1209 11:54:57.339064  662586 logs.go:282] 0 containers: []
	W1209 11:54:57.339073  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:54:57.339085  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:54:57.339122  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:54:57.352603  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:54:57.352637  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:54:57.426627  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:54:57.426653  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:54:57.426669  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:54:57.515357  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:54:57.515401  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:54:57.554882  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:54:57.554925  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:54:56.396610  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:58.895822  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:56.451429  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:58.951440  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:54:57.590864  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:00.091967  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:00.112082  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:00.124977  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:00.125056  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:00.159003  662586 cri.go:89] found id: ""
	I1209 11:55:00.159032  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.159041  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:00.159048  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:00.159101  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:00.192479  662586 cri.go:89] found id: ""
	I1209 11:55:00.192515  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.192527  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:00.192533  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:00.192587  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:00.226146  662586 cri.go:89] found id: ""
	I1209 11:55:00.226194  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.226208  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:00.226216  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:00.226273  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:00.260389  662586 cri.go:89] found id: ""
	I1209 11:55:00.260420  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.260430  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:00.260442  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:00.260500  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:00.296091  662586 cri.go:89] found id: ""
	I1209 11:55:00.296121  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.296131  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:00.296138  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:00.296195  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:00.332101  662586 cri.go:89] found id: ""
	I1209 11:55:00.332137  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.332150  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:00.332158  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:00.332244  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:00.377329  662586 cri.go:89] found id: ""
	I1209 11:55:00.377358  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.377368  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:00.377374  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:00.377438  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:00.415660  662586 cri.go:89] found id: ""
	I1209 11:55:00.415688  662586 logs.go:282] 0 containers: []
	W1209 11:55:00.415751  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:00.415767  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:00.415781  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:00.467734  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:00.467776  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:00.481244  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:00.481280  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:00.545721  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:00.545755  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:00.545777  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:00.624482  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:00.624533  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:01.396452  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.895539  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:01.452337  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.950752  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:05.951246  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:02.092654  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:04.592173  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:03.168340  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:03.183354  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:03.183439  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:03.223131  662586 cri.go:89] found id: ""
	I1209 11:55:03.223171  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.223185  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:03.223193  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:03.223263  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:03.256561  662586 cri.go:89] found id: ""
	I1209 11:55:03.256595  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.256603  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:03.256609  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:03.256667  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:03.289670  662586 cri.go:89] found id: ""
	I1209 11:55:03.289707  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.289722  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:03.289738  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:03.289813  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:03.323687  662586 cri.go:89] found id: ""
	I1209 11:55:03.323714  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.323724  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:03.323730  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:03.323786  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:03.358163  662586 cri.go:89] found id: ""
	I1209 11:55:03.358221  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.358233  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:03.358241  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:03.358311  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:03.399688  662586 cri.go:89] found id: ""
	I1209 11:55:03.399721  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.399734  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:03.399744  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:03.399812  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:03.433909  662586 cri.go:89] found id: ""
	I1209 11:55:03.433939  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.433948  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:03.433954  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:03.434011  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:03.470208  662586 cri.go:89] found id: ""
	I1209 11:55:03.470239  662586 logs.go:282] 0 containers: []
	W1209 11:55:03.470248  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:03.470270  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:03.470289  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:03.545801  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:03.545848  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:03.584357  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:03.584389  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:03.641241  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:03.641283  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:03.657034  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:03.657080  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:03.731285  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:06.232380  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:06.246339  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:06.246411  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:06.281323  662586 cri.go:89] found id: ""
	I1209 11:55:06.281362  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.281377  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:06.281385  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:06.281444  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:06.318225  662586 cri.go:89] found id: ""
	I1209 11:55:06.318261  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.318277  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:06.318293  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:06.318364  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:06.353649  662586 cri.go:89] found id: ""
	I1209 11:55:06.353685  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.353699  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:06.353708  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:06.353782  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:06.395204  662586 cri.go:89] found id: ""
	I1209 11:55:06.395242  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.395257  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:06.395266  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:06.395335  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:06.436421  662586 cri.go:89] found id: ""
	I1209 11:55:06.436452  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.436462  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:06.436469  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:06.436524  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:06.472218  662586 cri.go:89] found id: ""
	I1209 11:55:06.472246  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.472255  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:06.472268  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:06.472335  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:06.506585  662586 cri.go:89] found id: ""
	I1209 11:55:06.506629  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.506640  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:06.506647  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:06.506702  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:06.541442  662586 cri.go:89] found id: ""
	I1209 11:55:06.541472  662586 logs.go:282] 0 containers: []
	W1209 11:55:06.541481  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:06.541493  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:06.541512  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:06.592642  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:06.592682  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:06.606764  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:06.606805  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:06.677693  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:06.677720  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:06.677740  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:06.766074  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:06.766124  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:05.896263  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:08.396283  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:07.951409  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:10.451540  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:06.592724  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:09.091961  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:09.305144  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:09.319352  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:09.319444  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:09.357918  662586 cri.go:89] found id: ""
	I1209 11:55:09.358027  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.358066  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:09.358077  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:09.358139  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:09.413181  662586 cri.go:89] found id: ""
	I1209 11:55:09.413213  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.413226  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:09.413234  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:09.413310  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:09.448417  662586 cri.go:89] found id: ""
	I1209 11:55:09.448460  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.448471  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:09.448480  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:09.448566  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:09.489732  662586 cri.go:89] found id: ""
	I1209 11:55:09.489765  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.489775  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:09.489781  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:09.489845  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:09.524919  662586 cri.go:89] found id: ""
	I1209 11:55:09.524948  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.524959  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:09.524968  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:09.525051  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:09.563268  662586 cri.go:89] found id: ""
	I1209 11:55:09.563301  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.563311  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:09.563318  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:09.563373  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:09.598747  662586 cri.go:89] found id: ""
	I1209 11:55:09.598780  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.598790  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:09.598798  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:09.598866  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:09.634447  662586 cri.go:89] found id: ""
	I1209 11:55:09.634479  662586 logs.go:282] 0 containers: []
	W1209 11:55:09.634492  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:09.634505  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:09.634520  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:09.647380  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:09.647419  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:09.721335  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:09.721363  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:09.721380  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:09.801039  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:09.801088  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:09.840929  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:09.840971  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:12.393810  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:12.407553  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:12.407654  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:12.444391  662586 cri.go:89] found id: ""
	I1209 11:55:12.444437  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.444450  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:12.444459  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:12.444533  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:12.482714  662586 cri.go:89] found id: ""
	I1209 11:55:12.482752  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.482764  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:12.482771  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:12.482853  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:12.518139  662586 cri.go:89] found id: ""
	I1209 11:55:12.518187  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.518202  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:12.518211  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:12.518281  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:12.556903  662586 cri.go:89] found id: ""
	I1209 11:55:12.556938  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.556950  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:12.556958  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:12.557028  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:12.591915  662586 cri.go:89] found id: ""
	I1209 11:55:12.591953  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.591963  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:12.591971  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:12.592038  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:12.629767  662586 cri.go:89] found id: ""
	I1209 11:55:12.629797  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.629806  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:12.629812  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:12.629878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:12.667677  662586 cri.go:89] found id: ""
	I1209 11:55:12.667710  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.667720  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:12.667727  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:12.667781  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:10.896109  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.896992  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.451770  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:14.952359  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:11.591952  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:14.092213  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:12.705720  662586 cri.go:89] found id: ""
	I1209 11:55:12.705747  662586 logs.go:282] 0 containers: []
	W1209 11:55:12.705756  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:12.705766  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:12.705780  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:12.758399  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:12.758441  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:12.772297  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:12.772336  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:12.839545  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:12.839569  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:12.839582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:12.918424  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:12.918467  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:15.458122  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:15.473193  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:15.473284  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:15.508756  662586 cri.go:89] found id: ""
	I1209 11:55:15.508790  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.508799  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:15.508806  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:15.508862  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:15.544735  662586 cri.go:89] found id: ""
	I1209 11:55:15.544770  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.544782  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:15.544791  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:15.544866  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:15.577169  662586 cri.go:89] found id: ""
	I1209 11:55:15.577200  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.577210  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:15.577216  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:15.577277  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:15.610662  662586 cri.go:89] found id: ""
	I1209 11:55:15.610690  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.610700  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:15.610707  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:15.610763  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:15.645339  662586 cri.go:89] found id: ""
	I1209 11:55:15.645375  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.645386  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:15.645394  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:15.645469  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:15.682044  662586 cri.go:89] found id: ""
	I1209 11:55:15.682079  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.682096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:15.682106  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:15.682201  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:15.717193  662586 cri.go:89] found id: ""
	I1209 11:55:15.717228  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.717245  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:15.717256  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:15.717332  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:15.751756  662586 cri.go:89] found id: ""
	I1209 11:55:15.751792  662586 logs.go:282] 0 containers: []
	W1209 11:55:15.751803  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:15.751813  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:15.751827  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:15.811010  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:15.811063  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:15.842556  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:15.842597  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:15.920169  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:15.920195  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:15.920209  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:16.003180  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:16.003226  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:15.395666  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:17.396041  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:19.396262  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:17.451272  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:19.951638  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:16.591423  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:18.592456  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:21.090108  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:18.542563  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:18.555968  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:18.556059  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:18.588746  662586 cri.go:89] found id: ""
	I1209 11:55:18.588780  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.588790  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:18.588797  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:18.588854  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:18.623664  662586 cri.go:89] found id: ""
	I1209 11:55:18.623707  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.623720  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:18.623728  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:18.623798  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:18.659012  662586 cri.go:89] found id: ""
	I1209 11:55:18.659051  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.659065  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:18.659074  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:18.659148  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:18.693555  662586 cri.go:89] found id: ""
	I1209 11:55:18.693588  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.693600  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:18.693607  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:18.693661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:18.726609  662586 cri.go:89] found id: ""
	I1209 11:55:18.726641  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.726652  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:18.726659  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:18.726712  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:18.760654  662586 cri.go:89] found id: ""
	I1209 11:55:18.760682  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.760694  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:18.760704  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:18.760761  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:18.794656  662586 cri.go:89] found id: ""
	I1209 11:55:18.794688  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.794699  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:18.794706  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:18.794769  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:18.829988  662586 cri.go:89] found id: ""
	I1209 11:55:18.830030  662586 logs.go:282] 0 containers: []
	W1209 11:55:18.830045  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:18.830059  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:18.830073  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:18.872523  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:18.872558  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:18.929408  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:18.929449  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:18.943095  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:18.943133  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:19.009125  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:19.009150  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:19.009164  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:21.587418  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:21.606271  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:21.606358  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:21.653536  662586 cri.go:89] found id: ""
	I1209 11:55:21.653574  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.653586  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:21.653595  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:21.653671  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:21.687023  662586 cri.go:89] found id: ""
	I1209 11:55:21.687049  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.687060  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:21.687068  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:21.687131  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:21.720112  662586 cri.go:89] found id: ""
	I1209 11:55:21.720150  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.720163  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:21.720171  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:21.720243  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:21.754697  662586 cri.go:89] found id: ""
	I1209 11:55:21.754729  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.754740  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:21.754749  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:21.754814  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:21.793926  662586 cri.go:89] found id: ""
	I1209 11:55:21.793957  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.793967  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:21.793973  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:21.794040  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:21.827572  662586 cri.go:89] found id: ""
	I1209 11:55:21.827609  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.827622  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:21.827633  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:21.827700  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:21.861442  662586 cri.go:89] found id: ""
	I1209 11:55:21.861472  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.861490  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:21.861499  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:21.861565  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:21.894858  662586 cri.go:89] found id: ""
	I1209 11:55:21.894884  662586 logs.go:282] 0 containers: []
	W1209 11:55:21.894892  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:21.894901  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:21.894914  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:21.942567  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:21.942625  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:21.956849  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:21.956879  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:22.020700  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:22.020724  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:22.020738  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:22.095730  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:22.095767  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:21.896304  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.395936  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:21.951928  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.450997  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:23.090962  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:25.091816  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:24.631715  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:24.644165  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:24.644252  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:24.677720  662586 cri.go:89] found id: ""
	I1209 11:55:24.677757  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.677769  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:24.677778  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:24.677835  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:24.711053  662586 cri.go:89] found id: ""
	I1209 11:55:24.711086  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.711095  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:24.711101  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:24.711154  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:24.744107  662586 cri.go:89] found id: ""
	I1209 11:55:24.744139  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.744148  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:24.744154  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:24.744210  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:24.777811  662586 cri.go:89] found id: ""
	I1209 11:55:24.777853  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.777866  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:24.777876  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:24.777938  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:24.810524  662586 cri.go:89] found id: ""
	I1209 11:55:24.810558  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.810571  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:24.810580  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:24.810648  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:24.843551  662586 cri.go:89] found id: ""
	I1209 11:55:24.843582  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.843590  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:24.843597  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:24.843649  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:24.875342  662586 cri.go:89] found id: ""
	I1209 11:55:24.875371  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.875384  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:24.875390  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:24.875446  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:24.910298  662586 cri.go:89] found id: ""
	I1209 11:55:24.910329  662586 logs.go:282] 0 containers: []
	W1209 11:55:24.910340  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:24.910352  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:24.910377  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:24.962151  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:24.962204  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:24.976547  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:24.976577  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:25.050606  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:25.050635  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:25.050652  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:25.134204  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:25.134254  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:27.671220  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:27.685132  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:27.685194  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:26.895311  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:28.895954  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:26.950106  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:28.950915  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:30.952019  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:27.591908  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:30.090353  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:27.718113  662586 cri.go:89] found id: ""
	I1209 11:55:27.718141  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.718150  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:27.718160  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:27.718242  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:27.752350  662586 cri.go:89] found id: ""
	I1209 11:55:27.752384  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.752395  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:27.752401  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:27.752481  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:27.797360  662586 cri.go:89] found id: ""
	I1209 11:55:27.797393  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.797406  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:27.797415  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:27.797488  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:27.834549  662586 cri.go:89] found id: ""
	I1209 11:55:27.834579  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.834588  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:27.834594  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:27.834655  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:27.874403  662586 cri.go:89] found id: ""
	I1209 11:55:27.874440  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.874465  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:27.874474  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:27.874557  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:27.914324  662586 cri.go:89] found id: ""
	I1209 11:55:27.914360  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.914373  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:27.914380  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:27.914450  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:27.948001  662586 cri.go:89] found id: ""
	I1209 11:55:27.948043  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.948056  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:27.948066  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:27.948219  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:27.982329  662586 cri.go:89] found id: ""
	I1209 11:55:27.982360  662586 logs.go:282] 0 containers: []
	W1209 11:55:27.982369  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:27.982379  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:27.982391  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:28.038165  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:28.038228  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:28.051578  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:28.051609  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:28.119914  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:28.119937  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:28.119951  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:28.195634  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:28.195679  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:30.735392  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:30.748430  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:30.748521  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:30.780500  662586 cri.go:89] found id: ""
	I1209 11:55:30.780528  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.780537  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:30.780544  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:30.780606  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:30.812430  662586 cri.go:89] found id: ""
	I1209 11:55:30.812462  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.812470  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:30.812477  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:30.812530  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:30.854030  662586 cri.go:89] found id: ""
	I1209 11:55:30.854057  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.854066  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:30.854073  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:30.854130  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:30.892144  662586 cri.go:89] found id: ""
	I1209 11:55:30.892182  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.892202  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:30.892211  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:30.892284  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:30.927540  662586 cri.go:89] found id: ""
	I1209 11:55:30.927576  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.927590  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:30.927597  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:30.927660  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:30.963820  662586 cri.go:89] found id: ""
	I1209 11:55:30.963852  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.963861  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:30.963867  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:30.963920  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:30.997793  662586 cri.go:89] found id: ""
	I1209 11:55:30.997819  662586 logs.go:282] 0 containers: []
	W1209 11:55:30.997828  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:30.997836  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:30.997902  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:31.031649  662586 cri.go:89] found id: ""
	I1209 11:55:31.031699  662586 logs.go:282] 0 containers: []
	W1209 11:55:31.031712  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:31.031726  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:31.031746  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:31.101464  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:31.101492  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:31.101509  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:31.184635  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:31.184681  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:31.222690  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:31.222732  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:31.276518  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:31.276566  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:30.896544  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.395861  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.451560  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:35.952567  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:32.091788  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:34.592091  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:33.790941  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:33.805299  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:33.805390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:33.844205  662586 cri.go:89] found id: ""
	I1209 11:55:33.844241  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.844253  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:33.844262  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:33.844337  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:33.883378  662586 cri.go:89] found id: ""
	I1209 11:55:33.883410  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.883424  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:33.883431  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:33.883505  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:33.920007  662586 cri.go:89] found id: ""
	I1209 11:55:33.920049  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.920061  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:33.920074  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:33.920141  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:33.956111  662586 cri.go:89] found id: ""
	I1209 11:55:33.956163  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.956175  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:33.956183  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:33.956241  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:33.990057  662586 cri.go:89] found id: ""
	I1209 11:55:33.990092  662586 logs.go:282] 0 containers: []
	W1209 11:55:33.990102  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:33.990109  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:33.990166  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:34.023046  662586 cri.go:89] found id: ""
	I1209 11:55:34.023082  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.023096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:34.023103  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:34.023171  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:34.055864  662586 cri.go:89] found id: ""
	I1209 11:55:34.055898  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.055909  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:34.055916  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:34.055987  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:34.091676  662586 cri.go:89] found id: ""
	I1209 11:55:34.091710  662586 logs.go:282] 0 containers: []
	W1209 11:55:34.091722  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:34.091733  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:34.091747  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:34.142959  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:34.143002  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:34.156431  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:34.156466  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:34.230277  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:34.230303  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:34.230320  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:34.313660  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:34.313713  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:36.850056  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:36.862486  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:36.862582  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:36.893134  662586 cri.go:89] found id: ""
	I1209 11:55:36.893163  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.893173  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:36.893179  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:36.893257  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:36.927438  662586 cri.go:89] found id: ""
	I1209 11:55:36.927469  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.927479  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:36.927485  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:36.927546  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:36.958787  662586 cri.go:89] found id: ""
	I1209 11:55:36.958818  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.958829  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:36.958837  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:36.958901  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:36.995470  662586 cri.go:89] found id: ""
	I1209 11:55:36.995508  662586 logs.go:282] 0 containers: []
	W1209 11:55:36.995520  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:36.995529  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:36.995590  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:37.026705  662586 cri.go:89] found id: ""
	I1209 11:55:37.026736  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.026746  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:37.026752  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:37.026805  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:37.059717  662586 cri.go:89] found id: ""
	I1209 11:55:37.059748  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.059756  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:37.059762  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:37.059820  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:37.094049  662586 cri.go:89] found id: ""
	I1209 11:55:37.094076  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.094088  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:37.094097  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:37.094190  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:37.128684  662586 cri.go:89] found id: ""
	I1209 11:55:37.128715  662586 logs.go:282] 0 containers: []
	W1209 11:55:37.128724  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:37.128735  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:37.128755  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:37.177932  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:37.177973  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:37.191218  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:37.191252  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:37.256488  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:37.256521  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:37.256538  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:37.330603  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:37.330647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:35.895823  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.895972  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.952764  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:40.450704  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:37.092013  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:39.591402  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:39.868604  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:39.881991  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:39.882063  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:39.916750  662586 cri.go:89] found id: ""
	I1209 11:55:39.916786  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.916799  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:39.916806  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:39.916874  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:39.957744  662586 cri.go:89] found id: ""
	I1209 11:55:39.957773  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.957781  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:39.957788  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:39.957854  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:39.994613  662586 cri.go:89] found id: ""
	I1209 11:55:39.994645  662586 logs.go:282] 0 containers: []
	W1209 11:55:39.994654  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:39.994661  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:39.994726  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:40.032606  662586 cri.go:89] found id: ""
	I1209 11:55:40.032635  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.032644  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:40.032650  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:40.032710  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:40.067172  662586 cri.go:89] found id: ""
	I1209 11:55:40.067204  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.067214  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:40.067221  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:40.067278  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:40.101391  662586 cri.go:89] found id: ""
	I1209 11:55:40.101423  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.101432  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:40.101439  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:40.101510  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:40.133160  662586 cri.go:89] found id: ""
	I1209 11:55:40.133196  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.133209  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:40.133217  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:40.133283  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:40.166105  662586 cri.go:89] found id: ""
	I1209 11:55:40.166137  662586 logs.go:282] 0 containers: []
	W1209 11:55:40.166145  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:40.166160  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:40.166187  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:40.231525  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:40.231559  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:40.231582  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:40.311298  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:40.311354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:40.350040  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:40.350077  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:40.404024  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:40.404061  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:39.896541  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.396800  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.453720  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:44.950595  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.091300  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:44.591230  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:42.917868  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:42.930289  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:42.930357  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:42.962822  662586 cri.go:89] found id: ""
	I1209 11:55:42.962856  662586 logs.go:282] 0 containers: []
	W1209 11:55:42.962869  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:42.962878  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:42.962950  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:42.996932  662586 cri.go:89] found id: ""
	I1209 11:55:42.996962  662586 logs.go:282] 0 containers: []
	W1209 11:55:42.996972  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:42.996979  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:42.997040  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:43.031782  662586 cri.go:89] found id: ""
	I1209 11:55:43.031824  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.031837  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:43.031846  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:43.031915  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:43.064717  662586 cri.go:89] found id: ""
	I1209 11:55:43.064751  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.064764  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:43.064774  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:43.064851  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:43.097248  662586 cri.go:89] found id: ""
	I1209 11:55:43.097278  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.097287  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:43.097294  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:43.097356  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:43.135726  662586 cri.go:89] found id: ""
	I1209 11:55:43.135766  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.135779  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:43.135788  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:43.135881  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:43.171120  662586 cri.go:89] found id: ""
	I1209 11:55:43.171148  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.171157  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:43.171163  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:43.171216  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:43.207488  662586 cri.go:89] found id: ""
	I1209 11:55:43.207523  662586 logs.go:282] 0 containers: []
	W1209 11:55:43.207533  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:43.207545  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:43.207565  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:43.276112  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:43.276142  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:43.276159  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:43.354942  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:43.354990  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:43.392755  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:43.392800  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:43.445708  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:43.445752  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:45.962533  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:45.975508  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:45.975589  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:46.009619  662586 cri.go:89] found id: ""
	I1209 11:55:46.009653  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.009663  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:46.009670  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:46.009726  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:46.042218  662586 cri.go:89] found id: ""
	I1209 11:55:46.042250  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.042259  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:46.042265  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:46.042318  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:46.076204  662586 cri.go:89] found id: ""
	I1209 11:55:46.076239  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.076249  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:46.076255  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:46.076326  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:46.113117  662586 cri.go:89] found id: ""
	I1209 11:55:46.113145  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.113154  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:46.113160  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:46.113225  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:46.148232  662586 cri.go:89] found id: ""
	I1209 11:55:46.148277  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.148293  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:46.148303  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:46.148379  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:46.185028  662586 cri.go:89] found id: ""
	I1209 11:55:46.185083  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.185096  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:46.185106  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:46.185200  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:46.222882  662586 cri.go:89] found id: ""
	I1209 11:55:46.222920  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.222933  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:46.222941  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:46.223007  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:46.263486  662586 cri.go:89] found id: ""
	I1209 11:55:46.263528  662586 logs.go:282] 0 containers: []
	W1209 11:55:46.263538  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:46.263549  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:46.263565  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:46.340524  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:46.340550  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:46.340567  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:46.422768  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:46.422810  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:46.464344  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:46.464382  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:46.517311  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:46.517354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:44.895283  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.895427  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:48.895674  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.952912  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:48.953432  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:46.591521  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:49.093057  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:49.031192  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:49.043840  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:49.043929  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:49.077648  662586 cri.go:89] found id: ""
	I1209 11:55:49.077705  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.077720  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:49.077730  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:49.077802  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:49.114111  662586 cri.go:89] found id: ""
	I1209 11:55:49.114138  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.114146  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:49.114154  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:49.114236  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:49.147870  662586 cri.go:89] found id: ""
	I1209 11:55:49.147908  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.147917  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:49.147923  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:49.147976  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:49.185223  662586 cri.go:89] found id: ""
	I1209 11:55:49.185256  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.185269  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:49.185277  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:49.185350  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:49.218037  662586 cri.go:89] found id: ""
	I1209 11:55:49.218068  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.218077  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:49.218084  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:49.218138  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:49.255483  662586 cri.go:89] found id: ""
	I1209 11:55:49.255522  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.255535  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:49.255549  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:49.255629  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:49.288623  662586 cri.go:89] found id: ""
	I1209 11:55:49.288650  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.288659  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:49.288666  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:49.288732  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:49.322880  662586 cri.go:89] found id: ""
	I1209 11:55:49.322913  662586 logs.go:282] 0 containers: []
	W1209 11:55:49.322921  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:49.322930  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:49.322943  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:49.372380  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:49.372428  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:49.385877  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:49.385914  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:49.460078  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:49.460101  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:49.460114  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:49.534588  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:49.534647  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:52.071408  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:52.084198  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:52.084276  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:52.118908  662586 cri.go:89] found id: ""
	I1209 11:55:52.118937  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.118950  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:52.118958  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:52.119026  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:52.156494  662586 cri.go:89] found id: ""
	I1209 11:55:52.156521  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.156530  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:52.156535  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:52.156586  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:52.196037  662586 cri.go:89] found id: ""
	I1209 11:55:52.196075  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.196094  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:52.196102  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:52.196177  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:52.229436  662586 cri.go:89] found id: ""
	I1209 11:55:52.229465  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.229477  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:52.229486  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:52.229558  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:52.268751  662586 cri.go:89] found id: ""
	I1209 11:55:52.268785  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.268797  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:52.268805  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:52.268871  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:52.302405  662586 cri.go:89] found id: ""
	I1209 11:55:52.302436  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.302446  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:52.302453  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:52.302522  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:52.338641  662586 cri.go:89] found id: ""
	I1209 11:55:52.338676  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.338688  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:52.338698  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:52.338754  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:52.375541  662586 cri.go:89] found id: ""
	I1209 11:55:52.375578  662586 logs.go:282] 0 containers: []
	W1209 11:55:52.375591  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:52.375604  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:52.375624  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:52.389140  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:52.389190  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:52.460520  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:52.460546  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:52.460562  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:52.535234  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:52.535280  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:52.573317  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:52.573354  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:50.896292  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:52.896875  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:51.453540  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:53.456640  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:55.950197  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:51.590899  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:53.591317  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:56.092219  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:55.124068  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:55.136800  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:55.136868  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:55.169724  662586 cri.go:89] found id: ""
	I1209 11:55:55.169757  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.169769  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:55.169777  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:55.169843  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:55.207466  662586 cri.go:89] found id: ""
	I1209 11:55:55.207514  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.207528  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:55.207537  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:55.207600  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:55.241761  662586 cri.go:89] found id: ""
	I1209 11:55:55.241790  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.241801  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:55.241809  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:55.241874  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:55.274393  662586 cri.go:89] found id: ""
	I1209 11:55:55.274434  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.274447  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:55.274455  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:55.274522  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:55.307942  662586 cri.go:89] found id: ""
	I1209 11:55:55.307988  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.308002  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:55.308012  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:55.308088  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:55.340074  662586 cri.go:89] found id: ""
	I1209 11:55:55.340107  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.340116  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:55.340122  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:55.340196  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:55.388077  662586 cri.go:89] found id: ""
	I1209 11:55:55.388119  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.388140  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:55.388149  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:55.388230  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:55.422923  662586 cri.go:89] found id: ""
	I1209 11:55:55.422961  662586 logs.go:282] 0 containers: []
	W1209 11:55:55.422975  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:55.422990  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:55.423008  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:55.476178  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:55.476219  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:55.489891  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:55.489919  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:55.555705  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:55:55.555726  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:55.555745  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:55.634818  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:55.634862  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:55.396320  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:57.895122  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:57.951119  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:00.451659  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:58.092427  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:00.590304  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:55:58.173169  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:55:58.188529  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:55:58.188620  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:55:58.225602  662586 cri.go:89] found id: ""
	I1209 11:55:58.225630  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.225641  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:55:58.225649  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:55:58.225709  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:55:58.259597  662586 cri.go:89] found id: ""
	I1209 11:55:58.259638  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.259652  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:55:58.259662  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:55:58.259744  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:55:58.293287  662586 cri.go:89] found id: ""
	I1209 11:55:58.293320  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.293329  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:55:58.293336  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:55:58.293390  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:55:58.326581  662586 cri.go:89] found id: ""
	I1209 11:55:58.326611  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.326622  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:55:58.326630  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:55:58.326699  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:55:58.359636  662586 cri.go:89] found id: ""
	I1209 11:55:58.359665  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.359675  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:55:58.359681  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:55:58.359736  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:55:58.396767  662586 cri.go:89] found id: ""
	I1209 11:55:58.396798  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.396809  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:55:58.396818  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:55:58.396887  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:55:58.428907  662586 cri.go:89] found id: ""
	I1209 11:55:58.428941  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.428954  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:55:58.428962  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:55:58.429032  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:55:58.466082  662586 cri.go:89] found id: ""
	I1209 11:55:58.466124  662586 logs.go:282] 0 containers: []
	W1209 11:55:58.466136  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:55:58.466149  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:55:58.466186  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:55:58.542333  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:55:58.542378  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:58.582397  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:55:58.582436  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:55:58.632980  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:55:58.633030  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:55:58.648464  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:55:58.648514  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:55:58.711714  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:01.212475  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:01.225574  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:01.225642  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:01.259666  662586 cri.go:89] found id: ""
	I1209 11:56:01.259704  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.259718  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:01.259726  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:01.259800  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:01.295433  662586 cri.go:89] found id: ""
	I1209 11:56:01.295474  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.295495  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:01.295503  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:01.295561  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:01.330316  662586 cri.go:89] found id: ""
	I1209 11:56:01.330352  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.330364  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:01.330373  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:01.330447  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:01.366762  662586 cri.go:89] found id: ""
	I1209 11:56:01.366797  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.366808  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:01.366814  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:01.366878  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:01.403511  662586 cri.go:89] found id: ""
	I1209 11:56:01.403539  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.403547  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:01.403553  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:01.403604  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:01.436488  662586 cri.go:89] found id: ""
	I1209 11:56:01.436526  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.436538  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:01.436546  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:01.436617  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:01.471647  662586 cri.go:89] found id: ""
	I1209 11:56:01.471676  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.471685  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:01.471690  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:01.471744  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:01.504065  662586 cri.go:89] found id: ""
	I1209 11:56:01.504099  662586 logs.go:282] 0 containers: []
	W1209 11:56:01.504111  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:01.504124  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:01.504143  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:01.553434  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:01.553482  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:01.567537  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:01.567579  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:01.636968  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:01.636995  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:01.637012  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:01.713008  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:01.713049  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:55:59.896841  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.396972  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.451893  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.453118  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:02.591218  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.592199  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:04.253143  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:04.266428  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:04.266512  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:04.298769  662586 cri.go:89] found id: ""
	I1209 11:56:04.298810  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.298823  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:04.298833  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:04.298913  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:04.330392  662586 cri.go:89] found id: ""
	I1209 11:56:04.330428  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.330441  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:04.330449  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:04.330528  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:04.362409  662586 cri.go:89] found id: ""
	I1209 11:56:04.362443  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.362455  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:04.362463  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:04.362544  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:04.396853  662586 cri.go:89] found id: ""
	I1209 11:56:04.396884  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.396893  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:04.396899  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:04.396966  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:04.430425  662586 cri.go:89] found id: ""
	I1209 11:56:04.430461  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.430470  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:04.430477  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:04.430531  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:04.465354  662586 cri.go:89] found id: ""
	I1209 11:56:04.465391  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.465403  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:04.465411  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:04.465480  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:04.500114  662586 cri.go:89] found id: ""
	I1209 11:56:04.500156  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.500167  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:04.500179  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:04.500259  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:04.534853  662586 cri.go:89] found id: ""
	I1209 11:56:04.534888  662586 logs.go:282] 0 containers: []
	W1209 11:56:04.534902  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:04.534914  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:04.534928  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:04.586419  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:04.586457  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:04.600690  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:04.600728  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:04.669645  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:04.669685  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:04.669703  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:04.747973  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:04.748026  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:07.288721  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:07.302905  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:07.302975  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:07.336686  662586 cri.go:89] found id: ""
	I1209 11:56:07.336720  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.336728  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:07.336735  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:07.336798  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:07.370119  662586 cri.go:89] found id: ""
	I1209 11:56:07.370150  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.370159  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:07.370165  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:07.370245  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:07.402818  662586 cri.go:89] found id: ""
	I1209 11:56:07.402845  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.402853  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:07.402861  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:07.402923  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:07.437694  662586 cri.go:89] found id: ""
	I1209 11:56:07.437722  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.437732  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:07.437741  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:07.437806  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:07.474576  662586 cri.go:89] found id: ""
	I1209 11:56:07.474611  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.474622  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:07.474629  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:07.474705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:07.508538  662586 cri.go:89] found id: ""
	I1209 11:56:07.508575  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.508585  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:07.508592  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:07.508661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:07.548863  662586 cri.go:89] found id: ""
	I1209 11:56:07.548897  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.548911  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:07.548922  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:07.549093  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:07.592515  662586 cri.go:89] found id: ""
	I1209 11:56:07.592543  662586 logs.go:282] 0 containers: []
	W1209 11:56:07.592555  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:07.592564  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:07.592579  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:07.652176  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:07.652219  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:04.895898  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.395712  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.398273  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:06.950668  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.450539  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.091573  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:09.591049  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:07.703040  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:07.703094  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:07.717880  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:07.717924  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:07.783396  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:07.783425  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:07.783441  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:10.362395  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:10.377478  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:10.377574  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:10.411923  662586 cri.go:89] found id: ""
	I1209 11:56:10.411956  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.411969  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:10.411978  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:10.412049  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:10.444601  662586 cri.go:89] found id: ""
	I1209 11:56:10.444633  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.444642  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:10.444648  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:10.444705  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:10.486720  662586 cri.go:89] found id: ""
	I1209 11:56:10.486753  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.486763  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:10.486769  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:10.486822  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:10.523535  662586 cri.go:89] found id: ""
	I1209 11:56:10.523572  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.523581  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:10.523587  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:10.523641  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:10.557701  662586 cri.go:89] found id: ""
	I1209 11:56:10.557741  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.557754  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:10.557762  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:10.557834  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:10.593914  662586 cri.go:89] found id: ""
	I1209 11:56:10.593949  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.593959  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:10.593965  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:10.594017  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:10.626367  662586 cri.go:89] found id: ""
	I1209 11:56:10.626469  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.626482  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:10.626489  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:10.626547  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:10.665415  662586 cri.go:89] found id: ""
	I1209 11:56:10.665446  662586 logs.go:282] 0 containers: []
	W1209 11:56:10.665456  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:10.665467  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:10.665480  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:10.747483  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:10.747532  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:10.787728  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:10.787758  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:10.840678  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:10.840722  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:10.855774  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:10.855809  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:10.929638  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:11.896254  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:14.395661  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:11.451031  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.452502  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:15.951720  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:11.592197  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.593711  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:16.091641  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:13.430793  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:13.446156  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:13.446261  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:13.491624  662586 cri.go:89] found id: ""
	I1209 11:56:13.491662  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.491675  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 11:56:13.491684  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:13.491758  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:13.537619  662586 cri.go:89] found id: ""
	I1209 11:56:13.537653  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.537666  662586 logs.go:284] No container was found matching "etcd"
	I1209 11:56:13.537675  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:13.537750  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:13.585761  662586 cri.go:89] found id: ""
	I1209 11:56:13.585796  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.585810  662586 logs.go:284] No container was found matching "coredns"
	I1209 11:56:13.585819  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:13.585883  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:13.620740  662586 cri.go:89] found id: ""
	I1209 11:56:13.620774  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.620785  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 11:56:13.620791  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:13.620858  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:13.654405  662586 cri.go:89] found id: ""
	I1209 11:56:13.654433  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.654442  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 11:56:13.654448  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:13.654509  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:13.687520  662586 cri.go:89] found id: ""
	I1209 11:56:13.687547  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.687558  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 11:56:13.687566  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:13.687642  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:13.721105  662586 cri.go:89] found id: ""
	I1209 11:56:13.721140  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.721153  662586 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:13.721162  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 11:56:13.721238  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 11:56:13.753900  662586 cri.go:89] found id: ""
	I1209 11:56:13.753933  662586 logs.go:282] 0 containers: []
	W1209 11:56:13.753945  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 11:56:13.753960  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:13.753978  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:13.805864  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:13.805909  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:13.819356  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:13.819393  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 11:56:13.896097  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 11:56:13.896128  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:13.896150  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:13.979041  662586 logs.go:123] Gathering logs for container status ...
	I1209 11:56:13.979084  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:16.516777  662586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:16.529916  662586 kubeadm.go:597] duration metric: took 4m1.869807937s to restartPrimaryControlPlane
	W1209 11:56:16.530015  662586 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:56:16.530067  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:56:16.396353  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.896097  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.451294  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:20.452525  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.092780  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:20.593275  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:18.635832  662586 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.105742271s)
	I1209 11:56:18.635914  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:56:18.651678  662586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:56:18.661965  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:56:18.672060  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:56:18.672082  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:56:18.672147  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:56:18.681627  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:56:18.681697  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:56:18.691514  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:56:18.701210  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:56:18.701292  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:56:18.710934  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:56:18.720506  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:56:18.720583  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:56:18.729996  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:56:18.739425  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:56:18.739486  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:56:18.748788  662586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:56:18.981849  662586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:56:21.396764  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:23.894781  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:22.950912  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:24.951678  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:23.091306  662109 pod_ready.go:103] pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:24.592439  662109 pod_ready.go:82] duration metric: took 4m0.007699806s for pod "metrics-server-6867b74b74-pwcsr" in "kube-system" namespace to be "Ready" ...
	E1209 11:56:24.592477  662109 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1209 11:56:24.592486  662109 pod_ready.go:39] duration metric: took 4m7.416528348s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:56:24.592504  662109 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:56:24.592537  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:24.592590  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:24.643050  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:24.643085  662109 cri.go:89] found id: ""
	I1209 11:56:24.643094  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:24.643151  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.647529  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:24.647590  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:24.683125  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:24.683150  662109 cri.go:89] found id: ""
	I1209 11:56:24.683159  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:24.683222  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.687584  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:24.687706  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:24.720663  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:24.720699  662109 cri.go:89] found id: ""
	I1209 11:56:24.720708  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:24.720769  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.724881  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:24.724942  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:24.766055  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:24.766081  662109 cri.go:89] found id: ""
	I1209 11:56:24.766091  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:24.766152  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.770491  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:24.770557  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:24.804523  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:24.804549  662109 cri.go:89] found id: ""
	I1209 11:56:24.804558  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:24.804607  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.808452  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:24.808528  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:24.846043  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:24.846072  662109 cri.go:89] found id: ""
	I1209 11:56:24.846084  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:24.846140  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.849991  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:24.850057  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:24.884853  662109 cri.go:89] found id: ""
	I1209 11:56:24.884889  662109 logs.go:282] 0 containers: []
	W1209 11:56:24.884902  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:24.884912  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:24.884983  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:24.920103  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:24.920131  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:24.920135  662109 cri.go:89] found id: ""
	I1209 11:56:24.920152  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:24.920223  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.924212  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:24.928416  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:24.928436  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:25.077407  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:25.077468  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:25.125600  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:25.125649  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:25.163222  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:25.163268  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:25.208430  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:25.208465  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:25.245884  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:25.245917  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:25.318723  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:25.318775  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:25.333173  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:25.333207  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:25.394636  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:25.394683  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:25.435210  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:25.435248  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:25.482142  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:25.482184  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:25.516975  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:25.517006  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:25.565526  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:25.565565  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:25.896281  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:28.395529  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:27.454449  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:29.950704  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:28.549071  662109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:56:28.567288  662109 api_server.go:72] duration metric: took 4m18.770451099s to wait for apiserver process to appear ...
	I1209 11:56:28.567319  662109 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:56:28.567367  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:28.567418  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:28.603341  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:28.603365  662109 cri.go:89] found id: ""
	I1209 11:56:28.603372  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:28.603423  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.607416  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:28.607493  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:28.647437  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:28.647465  662109 cri.go:89] found id: ""
	I1209 11:56:28.647477  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:28.647539  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.651523  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:28.651584  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:28.687889  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:28.687920  662109 cri.go:89] found id: ""
	I1209 11:56:28.687929  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:28.687983  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.692025  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:28.692100  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:28.728934  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:28.728961  662109 cri.go:89] found id: ""
	I1209 11:56:28.728969  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:28.729020  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.733217  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:28.733300  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:28.768700  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:28.768726  662109 cri.go:89] found id: ""
	I1209 11:56:28.768735  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:28.768790  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.772844  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:28.772921  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:28.812073  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:28.812104  662109 cri.go:89] found id: ""
	I1209 11:56:28.812116  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:28.812195  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.816542  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:28.816612  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:28.850959  662109 cri.go:89] found id: ""
	I1209 11:56:28.850997  662109 logs.go:282] 0 containers: []
	W1209 11:56:28.851010  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:28.851018  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:28.851075  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:28.894115  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:28.894142  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:28.894148  662109 cri.go:89] found id: ""
	I1209 11:56:28.894157  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:28.894228  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.899260  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:28.903033  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:28.903055  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:28.916411  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:28.916447  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:28.965873  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:28.965911  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:29.003553  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:29.003591  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:29.038945  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:29.038989  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:29.079595  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:29.079636  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:29.117632  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:29.117665  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:29.556193  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:29.556245  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:29.629530  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:29.629571  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:29.746102  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:29.746137  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:29.799342  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:29.799379  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:29.851197  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:29.851254  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:29.884688  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:29.884725  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:30.396025  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:32.396195  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:34.396605  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:31.951405  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:34.451838  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:32.425773  662109 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I1209 11:56:32.432276  662109 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I1209 11:56:32.433602  662109 api_server.go:141] control plane version: v1.31.2
	I1209 11:56:32.433634  662109 api_server.go:131] duration metric: took 3.866306159s to wait for apiserver health ...
	I1209 11:56:32.433647  662109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:56:32.433680  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 11:56:32.433744  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 11:56:32.471560  662109 cri.go:89] found id: "478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:32.471593  662109 cri.go:89] found id: ""
	I1209 11:56:32.471604  662109 logs.go:282] 1 containers: [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb]
	I1209 11:56:32.471684  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.475735  662109 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 11:56:32.475809  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 11:56:32.509788  662109 cri.go:89] found id: "13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:32.509821  662109 cri.go:89] found id: ""
	I1209 11:56:32.509833  662109 logs.go:282] 1 containers: [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16]
	I1209 11:56:32.509889  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.513849  662109 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 11:56:32.513908  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 11:56:32.547022  662109 cri.go:89] found id: "909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:32.547046  662109 cri.go:89] found id: ""
	I1209 11:56:32.547055  662109 logs.go:282] 1 containers: [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42]
	I1209 11:56:32.547113  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.551393  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 11:56:32.551476  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 11:56:32.586478  662109 cri.go:89] found id: "73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:32.586516  662109 cri.go:89] found id: ""
	I1209 11:56:32.586536  662109 logs.go:282] 1 containers: [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413]
	I1209 11:56:32.586605  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.592876  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 11:56:32.592950  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 11:56:32.626775  662109 cri.go:89] found id: "de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:32.626803  662109 cri.go:89] found id: ""
	I1209 11:56:32.626812  662109 logs.go:282] 1 containers: [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2]
	I1209 11:56:32.626869  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.630757  662109 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 11:56:32.630825  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 11:56:32.663980  662109 cri.go:89] found id: "b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:32.664013  662109 cri.go:89] found id: ""
	I1209 11:56:32.664026  662109 logs.go:282] 1 containers: [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d]
	I1209 11:56:32.664093  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.668368  662109 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 11:56:32.668449  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 11:56:32.704638  662109 cri.go:89] found id: ""
	I1209 11:56:32.704675  662109 logs.go:282] 0 containers: []
	W1209 11:56:32.704688  662109 logs.go:284] No container was found matching "kindnet"
	I1209 11:56:32.704695  662109 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 11:56:32.704752  662109 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 11:56:32.743694  662109 cri.go:89] found id: "d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:32.743729  662109 cri.go:89] found id: "0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:32.743735  662109 cri.go:89] found id: ""
	I1209 11:56:32.743746  662109 logs.go:282] 2 containers: [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f]
	I1209 11:56:32.743814  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.749146  662109 ssh_runner.go:195] Run: which crictl
	I1209 11:56:32.753226  662109 logs.go:123] Gathering logs for kube-scheduler [73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413] ...
	I1209 11:56:32.753253  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73b01a8a4080f1488054462bb97f98c08528c799629927ce6a684fb0ade63413"
	I1209 11:56:32.787832  662109 logs.go:123] Gathering logs for kube-proxy [de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2] ...
	I1209 11:56:32.787877  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de64a319ab30ae80695633af0e1c9206551e2408918b743c2258216259de56d2"
	I1209 11:56:32.824859  662109 logs.go:123] Gathering logs for kube-controller-manager [b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d] ...
	I1209 11:56:32.824891  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6662f1bed1995329291aee23d70ac4157125e45c75dd92e34e07fa257f2be2d"
	I1209 11:56:32.881776  662109 logs.go:123] Gathering logs for storage-provisioner [d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb] ...
	I1209 11:56:32.881808  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d184b6139f52f80605886a70426659453eee61916ead187b881c64f6b4c59bbb"
	I1209 11:56:32.919018  662109 logs.go:123] Gathering logs for storage-provisioner [0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f] ...
	I1209 11:56:32.919064  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ef403336ca71e96a468d654fafbbe9fd9c37a3d04d2578e06771b425f20004f"
	I1209 11:56:32.956839  662109 logs.go:123] Gathering logs for CRI-O ...
	I1209 11:56:32.956869  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 11:56:33.334255  662109 logs.go:123] Gathering logs for kubelet ...
	I1209 11:56:33.334300  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 11:56:33.406008  662109 logs.go:123] Gathering logs for etcd [13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16] ...
	I1209 11:56:33.406049  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13e00a6fef368f36aa166cfe9a5a6e37476d0bff708b09a552a102e6103aff16"
	I1209 11:56:33.453689  662109 logs.go:123] Gathering logs for kube-apiserver [478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb] ...
	I1209 11:56:33.453724  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 478ca5095dcdb90c166952aee1291e7de907591fa02ac65de132b6d7e6ea79cb"
	I1209 11:56:33.496168  662109 logs.go:123] Gathering logs for coredns [909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42] ...
	I1209 11:56:33.496209  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 909852cc820d2c5faeb9b92e83a833c397dbbf46bbc715f79ef8b77f0a338f42"
	I1209 11:56:33.532057  662109 logs.go:123] Gathering logs for container status ...
	I1209 11:56:33.532090  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 11:56:33.575050  662109 logs.go:123] Gathering logs for dmesg ...
	I1209 11:56:33.575087  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 11:56:33.588543  662109 logs.go:123] Gathering logs for describe nodes ...
	I1209 11:56:33.588575  662109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 11:56:36.194483  662109 system_pods.go:59] 8 kube-system pods found
	I1209 11:56:36.194516  662109 system_pods.go:61] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running
	I1209 11:56:36.194522  662109 system_pods.go:61] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running
	I1209 11:56:36.194527  662109 system_pods.go:61] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running
	I1209 11:56:36.194531  662109 system_pods.go:61] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running
	I1209 11:56:36.194534  662109 system_pods.go:61] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:56:36.194538  662109 system_pods.go:61] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running
	I1209 11:56:36.194543  662109 system_pods.go:61] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:56:36.194549  662109 system_pods.go:61] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running
	I1209 11:56:36.194559  662109 system_pods.go:74] duration metric: took 3.76090495s to wait for pod list to return data ...
	I1209 11:56:36.194567  662109 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:56:36.197070  662109 default_sa.go:45] found service account: "default"
	I1209 11:56:36.197094  662109 default_sa.go:55] duration metric: took 2.520926ms for default service account to be created ...
	I1209 11:56:36.197104  662109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:56:36.201494  662109 system_pods.go:86] 8 kube-system pods found
	I1209 11:56:36.201518  662109 system_pods.go:89] "coredns-7c65d6cfc9-z647g" [0e15e13e-efe6-4ae2-8bac-205aadf8f95a] Running
	I1209 11:56:36.201524  662109 system_pods.go:89] "etcd-no-preload-820741" [3ea97088-f3b5-4c8f-ac11-7ab96f37bc5b] Running
	I1209 11:56:36.201528  662109 system_pods.go:89] "kube-apiserver-no-preload-820741" [bd0d3e20-bdab-41c1-bb25-0c5149b4e456] Running
	I1209 11:56:36.201533  662109 system_pods.go:89] "kube-controller-manager-no-preload-820741" [5eda77af-a169-4be5-9f96-6ad5406f9036] Running
	I1209 11:56:36.201537  662109 system_pods.go:89] "kube-proxy-hpvvp" [0945206c-8d1e-47e0-b35b-9011073423b2] Running
	I1209 11:56:36.201540  662109 system_pods.go:89] "kube-scheduler-no-preload-820741" [e7003668-a70d-4334-bb99-c63c463d87e0] Running
	I1209 11:56:36.201547  662109 system_pods.go:89] "metrics-server-6867b74b74-pwcsr" [40d4df7e-de82-478b-a77b-b27208d8262e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:56:36.201551  662109 system_pods.go:89] "storage-provisioner" [aeba46d3-ecf1-4923-b89c-75b34e75a06d] Running
	I1209 11:56:36.201558  662109 system_pods.go:126] duration metric: took 4.448871ms to wait for k8s-apps to be running ...
	I1209 11:56:36.201567  662109 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:56:36.201628  662109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:56:36.217457  662109 system_svc.go:56] duration metric: took 15.878252ms WaitForService to wait for kubelet
	I1209 11:56:36.217503  662109 kubeadm.go:582] duration metric: took 4m26.420670146s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:56:36.217527  662109 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:56:36.220498  662109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:56:36.220526  662109 node_conditions.go:123] node cpu capacity is 2
	I1209 11:56:36.220572  662109 node_conditions.go:105] duration metric: took 3.039367ms to run NodePressure ...
	I1209 11:56:36.220586  662109 start.go:241] waiting for startup goroutines ...
	I1209 11:56:36.220597  662109 start.go:246] waiting for cluster config update ...
	I1209 11:56:36.220628  662109 start.go:255] writing updated cluster config ...
	I1209 11:56:36.220974  662109 ssh_runner.go:195] Run: rm -f paused
	I1209 11:56:36.272920  662109 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:56:36.274686  662109 out.go:177] * Done! kubectl is now configured to use "no-preload-820741" cluster and "default" namespace by default
	I1209 11:56:36.895681  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:38.896066  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:36.951281  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:39.455225  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:41.395880  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:43.895464  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:41.951287  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:44.451357  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:45.896184  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:48.398617  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:46.451733  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:48.950857  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:50.950964  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:50.895678  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:52.896291  663024 pod_ready.go:103] pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:53.389365  663024 pod_ready.go:82] duration metric: took 4m0.00015362s for pod "metrics-server-6867b74b74-bpccn" in "kube-system" namespace to be "Ready" ...
	E1209 11:56:53.389414  663024 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1209 11:56:53.389440  663024 pod_ready.go:39] duration metric: took 4m13.044002506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:56:53.389480  663024 kubeadm.go:597] duration metric: took 4m21.286289463s to restartPrimaryControlPlane
	W1209 11:56:53.389572  663024 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:56:53.389610  663024 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:56:52.951153  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:55.451223  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:56:57.950413  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:00.449904  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:02.450069  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:04.451074  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:06.950873  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:08.951176  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:11.450596  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:13.451552  661546 pod_ready.go:103] pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:13.944884  661546 pod_ready.go:82] duration metric: took 4m0.000348644s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" ...
	E1209 11:57:13.944919  661546 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-x4kvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I1209 11:57:13.944943  661546 pod_ready.go:39] duration metric: took 4m14.049505666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
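[Editorial note, not part of the log] The repeating pod_ready.go:103 entries above are individual probes of the pod's Ready condition, retried until it flips to True or the 4m0s budget expires. A rough, hypothetical equivalent of a single probe via kubectl is sketched below; the context, namespace, and pod names are copied from the log, and this is not minikube's implementation (which goes through the Kubernetes API client on the same 4m0s budget).

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// podReady performs one probe of the kind logged above: read the pod's Ready
// condition via kubectl jsonpath output and report whether it is "True".
func podReady(kubeContext, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext, "-n", namespace,
		"get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	ready, err := podReady("embed-certs-005123", "kube-system", "metrics-server-6867b74b74-x4kvn")
	fmt.Println(ready, err)
}
```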
	I1209 11:57:13.944980  661546 kubeadm.go:597] duration metric: took 4m22.094543781s to restartPrimaryControlPlane
	W1209 11:57:13.945086  661546 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 11:57:13.945123  661546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:57:19.569119  663024 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.179481312s)
	I1209 11:57:19.569196  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:19.583584  663024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:57:19.592807  663024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:57:19.602121  663024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:57:19.602190  663024 kubeadm.go:157] found existing configuration files:
	
	I1209 11:57:19.602249  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1209 11:57:19.611109  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:57:19.611187  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:57:19.620264  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1209 11:57:19.629026  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:57:19.629103  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:57:19.638036  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1209 11:57:19.646265  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:57:19.646331  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:57:19.655187  663024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1209 11:57:19.663908  663024 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:57:19.663962  663024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
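[Editorial note, not part of the log] The kubeadm.go:163 block above is a stale-config sweep: each kubeconfig file is kept only if it already references the expected control-plane endpoint, and removed otherwise so the following `kubeadm init` can regenerate it. A hypothetical sketch of that pattern is below; paths and the endpoint are taken from the log, the SSH transport minikube uses is elided, and this is not minikube's actual code.

```go
package main

import "os/exec"

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the pattern or the file is missing,
		// which is exactly the "will remove" case logged above.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
```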
	I1209 11:57:19.673002  663024 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:57:19.717664  663024 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 11:57:19.717737  663024 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:57:19.818945  663024 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:57:19.819065  663024 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:57:19.819160  663024 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 11:57:19.828186  663024 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:57:19.829831  663024 out.go:235]   - Generating certificates and keys ...
	I1209 11:57:19.829938  663024 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:57:19.830031  663024 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:57:19.830145  663024 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:57:19.830252  663024 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:57:19.830377  663024 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:57:19.830470  663024 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:57:19.830568  663024 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:57:19.830644  663024 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:57:19.830745  663024 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:57:19.830825  663024 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:57:19.830878  663024 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:57:19.830963  663024 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:57:19.961813  663024 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:57:20.436964  663024 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 11:57:20.652041  663024 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:57:20.837664  663024 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:57:20.892035  663024 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:57:20.892497  663024 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:57:20.895295  663024 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:57:20.896871  663024 out.go:235]   - Booting up control plane ...
	I1209 11:57:20.896992  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:57:20.897139  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:57:20.897260  663024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:57:20.914735  663024 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:57:20.920520  663024 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:57:20.920566  663024 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:57:21.047290  663024 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 11:57:21.047437  663024 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 11:57:22.049131  663024 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001914766s
	I1209 11:57:22.049257  663024 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 11:57:27.053443  663024 kubeadm.go:310] [api-check] The API server is healthy after 5.002570817s
	I1209 11:57:27.068518  663024 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 11:57:27.086371  663024 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 11:57:27.114617  663024 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 11:57:27.114833  663024 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-482476 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 11:57:27.131354  663024 kubeadm.go:310] [bootstrap-token] Using token: 6aanjy.0y855mmcca5ic9co
	I1209 11:57:27.132852  663024 out.go:235]   - Configuring RBAC rules ...
	I1209 11:57:27.132992  663024 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 11:57:27.139770  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 11:57:27.147974  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 11:57:27.155508  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 11:57:27.159181  663024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 11:57:27.163403  663024 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 11:57:27.458812  663024 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 11:57:27.900322  663024 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 11:57:28.458864  663024 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 11:57:28.459944  663024 kubeadm.go:310] 
	I1209 11:57:28.460043  663024 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 11:57:28.460054  663024 kubeadm.go:310] 
	I1209 11:57:28.460156  663024 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 11:57:28.460166  663024 kubeadm.go:310] 
	I1209 11:57:28.460198  663024 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 11:57:28.460284  663024 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 11:57:28.460385  663024 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 11:57:28.460414  663024 kubeadm.go:310] 
	I1209 11:57:28.460499  663024 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 11:57:28.460509  663024 kubeadm.go:310] 
	I1209 11:57:28.460576  663024 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 11:57:28.460586  663024 kubeadm.go:310] 
	I1209 11:57:28.460663  663024 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 11:57:28.460766  663024 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 11:57:28.460862  663024 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 11:57:28.460871  663024 kubeadm.go:310] 
	I1209 11:57:28.460992  663024 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 11:57:28.461096  663024 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 11:57:28.461121  663024 kubeadm.go:310] 
	I1209 11:57:28.461244  663024 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 6aanjy.0y855mmcca5ic9co \
	I1209 11:57:28.461395  663024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 11:57:28.461435  663024 kubeadm.go:310] 	--control-plane 
	I1209 11:57:28.461446  663024 kubeadm.go:310] 
	I1209 11:57:28.461551  663024 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 11:57:28.461574  663024 kubeadm.go:310] 
	I1209 11:57:28.461679  663024 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 6aanjy.0y855mmcca5ic9co \
	I1209 11:57:28.461832  663024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 11:57:28.462544  663024 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:57:28.462594  663024 cni.go:84] Creating CNI manager for ""
	I1209 11:57:28.462620  663024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:57:28.464574  663024 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:57:28.465952  663024 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:57:28.476155  663024 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 11:57:28.493471  663024 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:57:28.493551  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:28.493594  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-482476 minikube.k8s.io/updated_at=2024_12_09T11_57_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=default-k8s-diff-port-482476 minikube.k8s.io/primary=true
	I1209 11:57:28.506467  663024 ops.go:34] apiserver oom_adj: -16
	I1209 11:57:28.724224  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:29.224971  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:29.724660  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:30.224466  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:30.724354  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:31.224702  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:31.725101  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.224364  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.724357  663024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:32.844191  663024 kubeadm.go:1113] duration metric: took 4.350713188s to wait for elevateKubeSystemPrivileges
	I1209 11:57:32.844243  663024 kubeadm.go:394] duration metric: took 5m0.79272843s to StartCluster
	I1209 11:57:32.844287  663024 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:32.844417  663024 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:57:32.846697  663024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:32.847014  663024 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.25 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:57:32.847067  663024 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:57:32.847162  663024 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847186  663024 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847192  663024 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.847201  663024 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:57:32.847204  663024 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-482476"
	I1209 11:57:32.847228  663024 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-482476"
	I1209 11:57:32.847272  663024 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.847287  663024 addons.go:243] addon metrics-server should already be in state true
	I1209 11:57:32.847285  663024 config.go:182] Loaded profile config "default-k8s-diff-port-482476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:57:32.847328  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.847237  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.847705  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847713  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847750  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.847755  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.847841  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.847873  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.848599  663024 out.go:177] * Verifying Kubernetes components...
	I1209 11:57:32.850246  663024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:57:32.864945  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44639
	I1209 11:57:32.865141  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37211
	I1209 11:57:32.865203  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42835
	I1209 11:57:32.865473  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.865635  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.865733  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.866096  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866115  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866241  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866264  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866241  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.866316  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.866642  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866654  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866656  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.866865  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.867243  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.867287  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.867321  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.867358  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.871085  663024 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-482476"
	W1209 11:57:32.871109  663024 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:57:32.871142  663024 host.go:66] Checking if "default-k8s-diff-port-482476" exists ...
	I1209 11:57:32.871395  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.871431  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.883301  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45049
	I1209 11:57:32.883976  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.884508  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36509
	I1209 11:57:32.884758  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.884775  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.885123  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.885279  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.885610  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.885801  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.885817  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.886142  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.886347  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.888357  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.888762  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
	I1209 11:57:32.889103  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.889192  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.889669  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.889692  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.890035  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.890082  663024 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:57:32.890647  663024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:32.890687  663024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:32.890867  663024 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:57:32.891756  663024 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:32.891774  663024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:57:32.891794  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.892543  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:57:32.892563  663024 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:57:32.892587  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.896754  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897437  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.897471  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897752  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.897836  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.897975  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.898370  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.898381  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.898395  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.898556  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:32.898649  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.898829  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.898975  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.899101  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:32.907891  663024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34063
	I1209 11:57:32.908317  663024 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:32.908827  663024 main.go:141] libmachine: Using API Version  1
	I1209 11:57:32.908848  663024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:32.909352  663024 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:32.909551  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetState
	I1209 11:57:32.911172  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .DriverName
	I1209 11:57:32.911417  663024 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:32.911434  663024 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:57:32.911460  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHHostname
	I1209 11:57:32.914016  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.914474  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:c9:8a", ip: ""} in network mk-default-k8s-diff-port-482476: {Iface:virbr2 ExpiryTime:2024-12-09 12:52:18 +0000 UTC Type:0 Mac:52:54:00:f0:c9:8a Iaid: IPaddr:192.168.50.25 Prefix:24 Hostname:default-k8s-diff-port-482476 Clientid:01:52:54:00:f0:c9:8a}
	I1209 11:57:32.914490  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | domain default-k8s-diff-port-482476 has defined IP address 192.168.50.25 and MAC address 52:54:00:f0:c9:8a in network mk-default-k8s-diff-port-482476
	I1209 11:57:32.914646  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHPort
	I1209 11:57:32.914838  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHKeyPath
	I1209 11:57:32.914965  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .GetSSHUsername
	I1209 11:57:32.915071  663024 sshutil.go:53] new ssh client: &{IP:192.168.50.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/default-k8s-diff-port-482476/id_rsa Username:docker}
	I1209 11:57:33.067075  663024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:57:33.085671  663024 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-482476" to be "Ready" ...
	I1209 11:57:33.095765  663024 node_ready.go:49] node "default-k8s-diff-port-482476" has status "Ready":"True"
	I1209 11:57:33.095801  663024 node_ready.go:38] duration metric: took 10.096442ms for node "default-k8s-diff-port-482476" to be "Ready" ...
	I1209 11:57:33.095815  663024 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:33.105497  663024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:33.200059  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:33.218467  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:57:33.218496  663024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:57:33.225990  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:33.278736  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:57:33.278772  663024 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:57:33.342270  663024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:33.342304  663024 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:57:33.412771  663024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:34.250639  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.050535014s)
	I1209 11:57:34.250706  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.250720  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.250704  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.024681453s)
	I1209 11:57:34.250811  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.250820  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.251151  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.251170  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.251182  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.251192  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.251197  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.251238  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.251245  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.251253  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.251261  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.253136  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.253141  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.253180  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.253182  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.253194  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.253214  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.279650  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.279682  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.280064  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.280116  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.280130  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.656217  663024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.243394493s)
	I1209 11:57:34.656287  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.656305  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.656641  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) DBG | Closing plugin on server side
	I1209 11:57:34.656655  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.656671  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.656683  663024 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:34.656691  663024 main.go:141] libmachine: (default-k8s-diff-port-482476) Calling .Close
	I1209 11:57:34.656982  663024 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:34.656999  663024 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:34.657011  663024 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-482476"
	I1209 11:57:34.658878  663024 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1209 11:57:34.660089  663024 addons.go:510] duration metric: took 1.813029421s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1209 11:57:35.122487  663024 pod_ready.go:103] pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:36.112072  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.112097  663024 pod_ready.go:82] duration metric: took 3.006564547s for pod "coredns-7c65d6cfc9-7rr27" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.112110  663024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.117521  663024 pod_ready.go:93] pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.117545  663024 pod_ready.go:82] duration metric: took 5.428168ms for pod "coredns-7c65d6cfc9-bb47s" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.117554  663024 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.122929  663024 pod_ready.go:93] pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.122953  663024 pod_ready.go:82] duration metric: took 5.392834ms for pod "etcd-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.122972  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.127025  663024 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.127047  663024 pod_ready.go:82] duration metric: took 4.068175ms for pod "kube-apiserver-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.127056  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.131036  663024 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.131055  663024 pod_ready.go:82] duration metric: took 3.993825ms for pod "kube-controller-manager-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.131064  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pgs52" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.508951  663024 pod_ready.go:93] pod "kube-proxy-pgs52" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.508980  663024 pod_ready.go:82] duration metric: took 377.910722ms for pod "kube-proxy-pgs52" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.508991  663024 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.909065  663024 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:36.909093  663024 pod_ready.go:82] duration metric: took 400.095775ms for pod "kube-scheduler-default-k8s-diff-port-482476" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:36.909100  663024 pod_ready.go:39] duration metric: took 3.813270613s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:36.909116  663024 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:57:36.909169  663024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:57:36.924688  663024 api_server.go:72] duration metric: took 4.077626254s to wait for apiserver process to appear ...
	I1209 11:57:36.924726  663024 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:57:36.924752  663024 api_server.go:253] Checking apiserver healthz at https://192.168.50.25:8444/healthz ...
	I1209 11:57:36.930782  663024 api_server.go:279] https://192.168.50.25:8444/healthz returned 200:
	ok
	I1209 11:57:36.931734  663024 api_server.go:141] control plane version: v1.31.2
	I1209 11:57:36.931758  663024 api_server.go:131] duration metric: took 7.024599ms to wait for apiserver health ...
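[Editorial note, not part of the log] The api_server.go entries just above poll https://192.168.50.25:8444/healthz until it returns 200 "ok". A minimal sketch of such a probe is shown below; the URL comes from the log, while the retry interval and the skipped TLS verification are simplifications (minikube verifies against the cluster CA), and this is not minikube's implementation.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200
// or the deadline passes, mirroring the check logged above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported healthy
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.25:8444/healthz", 4*time.Minute))
}
```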
	I1209 11:57:36.931766  663024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:57:37.112291  663024 system_pods.go:59] 9 kube-system pods found
	I1209 11:57:37.112323  663024 system_pods.go:61] "coredns-7c65d6cfc9-7rr27" [a5dd0401-80bf-4c87-9771-e1837c960425] Running
	I1209 11:57:37.112328  663024 system_pods.go:61] "coredns-7c65d6cfc9-bb47s" [2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4] Running
	I1209 11:57:37.112332  663024 system_pods.go:61] "etcd-default-k8s-diff-port-482476" [01dfcc5e-2ac5-4cee-8b68-ac1f3bf8866b] Running
	I1209 11:57:37.112337  663024 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-482476" [c65f1162-8d8a-47e8-8f1a-4abc0ebc8649] Running
	I1209 11:57:37.112340  663024 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-482476" [363e5f34-878a-434e-8bf2-21cdc06be305] Running
	I1209 11:57:37.112343  663024 system_pods.go:61] "kube-proxy-pgs52" [d5a3463e-e955-4345-9559-b23cce44fa0e] Running
	I1209 11:57:37.112346  663024 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-482476" [53f924fa-554b-4ceb-b171-efcfecdc137e] Running
	I1209 11:57:37.112356  663024 system_pods.go:61] "metrics-server-6867b74b74-2lmtn" [60803d31-d0b0-4d51-a9f2-cadafd184a90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:57:37.112363  663024 system_pods.go:61] "storage-provisioner" [6b53e3ba-9bc9-4b5a-bec9-d06336616c8a] Running
	I1209 11:57:37.112373  663024 system_pods.go:74] duration metric: took 180.599339ms to wait for pod list to return data ...
	I1209 11:57:37.112387  663024 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:57:37.309750  663024 default_sa.go:45] found service account: "default"
	I1209 11:57:37.309777  663024 default_sa.go:55] duration metric: took 197.382304ms for default service account to be created ...
	I1209 11:57:37.309787  663024 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:57:37.513080  663024 system_pods.go:86] 9 kube-system pods found
	I1209 11:57:37.513112  663024 system_pods.go:89] "coredns-7c65d6cfc9-7rr27" [a5dd0401-80bf-4c87-9771-e1837c960425] Running
	I1209 11:57:37.513118  663024 system_pods.go:89] "coredns-7c65d6cfc9-bb47s" [2ff9fcf2-1494-4739-9bf4-6c9dd5bcbbf4] Running
	I1209 11:57:37.513121  663024 system_pods.go:89] "etcd-default-k8s-diff-port-482476" [01dfcc5e-2ac5-4cee-8b68-ac1f3bf8866b] Running
	I1209 11:57:37.513128  663024 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-482476" [c65f1162-8d8a-47e8-8f1a-4abc0ebc8649] Running
	I1209 11:57:37.513133  663024 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-482476" [363e5f34-878a-434e-8bf2-21cdc06be305] Running
	I1209 11:57:37.513136  663024 system_pods.go:89] "kube-proxy-pgs52" [d5a3463e-e955-4345-9559-b23cce44fa0e] Running
	I1209 11:57:37.513141  663024 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-482476" [53f924fa-554b-4ceb-b171-efcfecdc137e] Running
	I1209 11:57:37.513150  663024 system_pods.go:89] "metrics-server-6867b74b74-2lmtn" [60803d31-d0b0-4d51-a9f2-cadafd184a90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:57:37.513156  663024 system_pods.go:89] "storage-provisioner" [6b53e3ba-9bc9-4b5a-bec9-d06336616c8a] Running
	I1209 11:57:37.513168  663024 system_pods.go:126] duration metric: took 203.373238ms to wait for k8s-apps to be running ...
	I1209 11:57:37.513181  663024 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:57:37.513233  663024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:37.527419  663024 system_svc.go:56] duration metric: took 14.22618ms WaitForService to wait for kubelet
	I1209 11:57:37.527451  663024 kubeadm.go:582] duration metric: took 4.680397826s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:57:37.527473  663024 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:57:37.710396  663024 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:57:37.710429  663024 node_conditions.go:123] node cpu capacity is 2
	I1209 11:57:37.710447  663024 node_conditions.go:105] duration metric: took 182.968526ms to run NodePressure ...
	I1209 11:57:37.710463  663024 start.go:241] waiting for startup goroutines ...
	I1209 11:57:37.710473  663024 start.go:246] waiting for cluster config update ...
	I1209 11:57:37.710487  663024 start.go:255] writing updated cluster config ...
	I1209 11:57:37.710799  663024 ssh_runner.go:195] Run: rm -f paused
	I1209 11:57:37.760468  663024 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:57:37.762472  663024 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-482476" cluster and "default" namespace by default
	I1209 11:57:40.219406  661546 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.274255602s)
	I1209 11:57:40.219478  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:57:40.234863  661546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 11:57:40.245357  661546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:57:40.255253  661546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:57:40.255276  661546 kubeadm.go:157] found existing configuration files:
	
	I1209 11:57:40.255319  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:57:40.264881  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:57:40.264934  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:57:40.274990  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:57:40.284941  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:57:40.284998  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:57:40.295188  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:57:40.305136  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:57:40.305181  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:57:40.315125  661546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:57:40.324727  661546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:57:40.324789  661546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 11:57:40.333574  661546 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:57:40.378743  661546 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 11:57:40.378932  661546 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:57:40.492367  661546 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:57:40.492493  661546 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:57:40.492658  661546 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 11:57:40.504994  661546 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:57:40.506760  661546 out.go:235]   - Generating certificates and keys ...
	I1209 11:57:40.506878  661546 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:57:40.506955  661546 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:57:40.507033  661546 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:57:40.507088  661546 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:57:40.507156  661546 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:57:40.507274  661546 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:57:40.507377  661546 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:57:40.507463  661546 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:57:40.507573  661546 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:57:40.507692  661546 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:57:40.507756  661546 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:57:40.507836  661546 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:57:40.607744  661546 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:57:40.684950  661546 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 11:57:40.826079  661546 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:57:40.945768  661546 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:57:41.212984  661546 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:57:41.213406  661546 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:57:41.216390  661546 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:57:41.218053  661546 out.go:235]   - Booting up control plane ...
	I1209 11:57:41.218202  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:57:41.218307  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:57:41.220009  661546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:57:41.237816  661546 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:57:41.244148  661546 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:57:41.244204  661546 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:57:41.371083  661546 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 11:57:41.371245  661546 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 11:57:41.872938  661546 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.998998ms
	I1209 11:57:41.873141  661546 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 11:57:46.874725  661546 kubeadm.go:310] [api-check] The API server is healthy after 5.001587898s
	I1209 11:57:46.886996  661546 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 11:57:46.897941  661546 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 11:57:46.927451  661546 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 11:57:46.927718  661546 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-005123 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 11:57:46.945578  661546 kubeadm.go:310] [bootstrap-token] Using token: bhdcn7.orsewwwtbk1gmdg8
	I1209 11:57:46.946894  661546 out.go:235]   - Configuring RBAC rules ...
	I1209 11:57:46.947041  661546 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 11:57:46.950006  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 11:57:46.956761  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 11:57:46.959756  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 11:57:46.962973  661546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 11:57:46.970016  661546 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 11:57:47.282251  661546 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 11:57:47.714588  661546 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 11:57:48.283610  661546 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 11:57:48.283671  661546 kubeadm.go:310] 
	I1209 11:57:48.283774  661546 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 11:57:48.283786  661546 kubeadm.go:310] 
	I1209 11:57:48.283901  661546 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 11:57:48.283948  661546 kubeadm.go:310] 
	I1209 11:57:48.283995  661546 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 11:57:48.284089  661546 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 11:57:48.284139  661546 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 11:57:48.284148  661546 kubeadm.go:310] 
	I1209 11:57:48.284216  661546 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 11:57:48.284224  661546 kubeadm.go:310] 
	I1209 11:57:48.284281  661546 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 11:57:48.284291  661546 kubeadm.go:310] 
	I1209 11:57:48.284359  661546 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 11:57:48.284465  661546 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 11:57:48.284583  661546 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 11:57:48.284596  661546 kubeadm.go:310] 
	I1209 11:57:48.284739  661546 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 11:57:48.284846  661546 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 11:57:48.284859  661546 kubeadm.go:310] 
	I1209 11:57:48.284972  661546 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bhdcn7.orsewwwtbk1gmdg8 \
	I1209 11:57:48.285133  661546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a \
	I1209 11:57:48.285170  661546 kubeadm.go:310] 	--control-plane 
	I1209 11:57:48.285184  661546 kubeadm.go:310] 
	I1209 11:57:48.285312  661546 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 11:57:48.285321  661546 kubeadm.go:310] 
	I1209 11:57:48.285388  661546 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bhdcn7.orsewwwtbk1gmdg8 \
	I1209 11:57:48.285530  661546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c011865ce27f188d55feab5f04226df0aae8d2e7cfe661283806c6f2de0ef29a 
	I1209 11:57:48.286117  661546 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 11:57:48.286246  661546 cni.go:84] Creating CNI manager for ""
	I1209 11:57:48.286263  661546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 11:57:48.288141  661546 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 11:57:48.289484  661546 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 11:57:48.301160  661546 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
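The 496-byte 1-k8s.conflist copied above is what points CRI-O at the kernel bridge plugin for pod networking. A minimal sketch of such a bridge conflist, with illustrative plugin settings and pod subnet rather than the exact file minikube writes, would look roughly like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }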
	I1209 11:57:48.320752  661546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 11:57:48.320831  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:48.320831  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-005123 minikube.k8s.io/updated_at=2024_12_09T11_57_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=embed-certs-005123 minikube.k8s.io/primary=true
	I1209 11:57:48.552069  661546 ops.go:34] apiserver oom_adj: -16
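A negative oom_adj such as the -16 reported above makes the kernel's OOM killer much less likely to pick the kube-apiserver than ordinary workload processes when memory runs short. The equivalent modern knob can be read on the node with something like (process name assumed to match the running apiserver):

    cat /proc/$(pgrep -xo kube-apiserver)/oom_score_adj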
	I1209 11:57:48.552119  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:49.052304  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:49.552516  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:50.052548  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:50.552931  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:51.052381  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:51.552589  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.052273  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.552546  661546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 11:57:52.645059  661546 kubeadm.go:1113] duration metric: took 4.324296774s to wait for elevateKubeSystemPrivileges
	I1209 11:57:52.645107  661546 kubeadm.go:394] duration metric: took 5m0.847017281s to StartCluster
	I1209 11:57:52.645133  661546 settings.go:142] acquiring lock: {Name:mkd819f3469f56ef651f2e5bb461e93bb6b2b5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:52.645241  661546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:57:52.647822  661546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-609844/kubeconfig: {Name:mk32876a1ab47f13c0d7c0ba1ee5e32de01b4a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 11:57:52.648129  661546 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.218 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 11:57:52.648226  661546 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 11:57:52.648338  661546 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-005123"
	I1209 11:57:52.648354  661546 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-005123"
	W1209 11:57:52.648366  661546 addons.go:243] addon storage-provisioner should already be in state true
	I1209 11:57:52.648367  661546 addons.go:69] Setting default-storageclass=true in profile "embed-certs-005123"
	I1209 11:57:52.648396  661546 config.go:182] Loaded profile config "embed-certs-005123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:57:52.648397  661546 addons.go:69] Setting metrics-server=true in profile "embed-certs-005123"
	I1209 11:57:52.648434  661546 addons.go:234] Setting addon metrics-server=true in "embed-certs-005123"
	I1209 11:57:52.648399  661546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-005123"
	W1209 11:57:52.648448  661546 addons.go:243] addon metrics-server should already be in state true
	I1209 11:57:52.648499  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.648400  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.648867  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648883  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648914  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.648932  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.648947  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.648917  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.649702  661546 out.go:177] * Verifying Kubernetes components...
	I1209 11:57:52.651094  661546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 11:57:52.665090  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38065
	I1209 11:57:52.665309  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35905
	I1209 11:57:52.665602  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.665889  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.666308  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.666329  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.666470  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.666492  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.666768  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.666907  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.667140  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I1209 11:57:52.667344  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.667387  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.667536  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.667580  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.667652  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.668127  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.668154  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.668657  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.668868  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.672550  661546 addons.go:234] Setting addon default-storageclass=true in "embed-certs-005123"
	W1209 11:57:52.672580  661546 addons.go:243] addon default-storageclass should already be in state true
	I1209 11:57:52.672612  661546 host.go:66] Checking if "embed-certs-005123" exists ...
	I1209 11:57:52.672985  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.673032  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.684848  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43667
	I1209 11:57:52.684854  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I1209 11:57:52.685398  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.685451  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.686054  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.686081  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.686155  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.686228  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.686553  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.686614  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.686753  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.686930  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.687838  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33245
	I1209 11:57:52.688391  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.688818  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.689013  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.689040  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.689314  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.689450  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.689908  661546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:57:52.689943  661546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:57:52.691136  661546 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 11:57:52.691137  661546 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1209 11:57:52.692714  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 11:57:52.692732  661546 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 11:57:52.692749  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.692789  661546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:52.692800  661546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 11:57:52.692813  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.696349  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.696791  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.696815  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697088  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697143  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.697314  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.697482  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.697512  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.697547  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.697658  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.697787  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.697962  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.698093  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.698209  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.705766  661546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37265
	I1209 11:57:52.706265  661546 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:57:52.706694  661546 main.go:141] libmachine: Using API Version  1
	I1209 11:57:52.706721  661546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:57:52.707031  661546 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:57:52.707241  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetState
	I1209 11:57:52.708747  661546 main.go:141] libmachine: (embed-certs-005123) Calling .DriverName
	I1209 11:57:52.708980  661546 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:52.708997  661546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 11:57:52.709016  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHHostname
	I1209 11:57:52.711546  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.711986  661546 main.go:141] libmachine: (embed-certs-005123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:a0:a8", ip: ""} in network mk-embed-certs-005123: {Iface:virbr4 ExpiryTime:2024-12-09 12:52:37 +0000 UTC Type:0 Mac:52:54:00:ee:a0:a8 Iaid: IPaddr:192.168.72.218 Prefix:24 Hostname:embed-certs-005123 Clientid:01:52:54:00:ee:a0:a8}
	I1209 11:57:52.712011  661546 main.go:141] libmachine: (embed-certs-005123) DBG | domain embed-certs-005123 has defined IP address 192.168.72.218 and MAC address 52:54:00:ee:a0:a8 in network mk-embed-certs-005123
	I1209 11:57:52.712263  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHPort
	I1209 11:57:52.712438  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHKeyPath
	I1209 11:57:52.712604  661546 main.go:141] libmachine: (embed-certs-005123) Calling .GetSSHUsername
	I1209 11:57:52.712751  661546 sshutil.go:53] new ssh client: &{IP:192.168.72.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/embed-certs-005123/id_rsa Username:docker}
	I1209 11:57:52.858535  661546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 11:57:52.879035  661546 node_ready.go:35] waiting up to 6m0s for node "embed-certs-005123" to be "Ready" ...
	I1209 11:57:52.899550  661546 node_ready.go:49] node "embed-certs-005123" has status "Ready":"True"
	I1209 11:57:52.899575  661546 node_ready.go:38] duration metric: took 20.508179ms for node "embed-certs-005123" to be "Ready" ...
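The readiness check that just completed can also be reproduced with kubectl's built-in wait, for example (context and node name taken from the log; the timeout is illustrative):

    kubectl --context embed-certs-005123 wait --for=condition=Ready node/embed-certs-005123 --timeout=6m0s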
	I1209 11:57:52.899589  661546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:57:52.960716  661546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:52.962755  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 11:57:52.962779  661546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1209 11:57:52.995747  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 11:57:52.995787  661546 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 11:57:53.031395  661546 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:53.031426  661546 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 11:57:53.031535  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 11:57:53.049695  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 11:57:53.061716  661546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 11:57:53.314158  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.314212  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.314523  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:53.314548  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.314565  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:53.314586  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.314598  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.314857  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.314875  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:53.323573  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:53.323590  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:53.323822  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:53.323873  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:53.323882  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.004616  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.004655  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.005050  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.005067  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.005075  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.005083  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.005351  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.005372  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.005370  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:54.352527  661546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.290758533s)
	I1209 11:57:54.352616  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.352636  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.352957  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.352977  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.352987  661546 main.go:141] libmachine: Making call to close driver server
	I1209 11:57:54.352995  661546 main.go:141] libmachine: (embed-certs-005123) Calling .Close
	I1209 11:57:54.353278  661546 main.go:141] libmachine: Successfully made call to close driver server
	I1209 11:57:54.353320  661546 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 11:57:54.353336  661546 addons.go:475] Verifying addon metrics-server=true in "embed-certs-005123"
	I1209 11:57:54.353387  661546 main.go:141] libmachine: (embed-certs-005123) DBG | Closing plugin on server side
	I1209 11:57:54.355153  661546 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1209 11:57:54.356250  661546 addons.go:510] duration metric: took 1.708044398s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
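Since metrics-server is listed as enabled here but its pod is still shown Pending further down, a quick manual check of the three addons would look roughly like this (context name from the log; resource names are the standard ones minikube's manifests create):

    kubectl --context embed-certs-005123 -n kube-system get deployment metrics-server
    kubectl --context embed-certs-005123 -n kube-system get pod storage-provisioner
    kubectl --context embed-certs-005123 get storageclass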
	I1209 11:57:54.968202  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace has status "Ready":"False"
	I1209 11:57:57.467948  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace has status "Ready":"True"
	I1209 11:57:57.467979  661546 pod_ready.go:82] duration metric: took 4.507228843s for pod "coredns-7c65d6cfc9-t49mk" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:57.467992  661546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace to be "Ready" ...
	I1209 11:57:59.475024  661546 pod_ready.go:103] pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace has status "Ready":"False"
	I1209 11:58:00.473961  661546 pod_ready.go:93] pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.473987  661546 pod_ready.go:82] duration metric: took 3.005987981s for pod "coredns-7c65d6cfc9-xspr9" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.473996  661546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.478022  661546 pod_ready.go:93] pod "etcd-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.478040  661546 pod_ready.go:82] duration metric: took 4.038353ms for pod "etcd-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.478049  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.482415  661546 pod_ready.go:93] pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.482439  661546 pod_ready.go:82] duration metric: took 4.384854ms for pod "kube-apiserver-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.482449  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.486284  661546 pod_ready.go:93] pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.486311  661546 pod_ready.go:82] duration metric: took 3.85467ms for pod "kube-controller-manager-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.486326  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n4pph" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.490260  661546 pod_ready.go:93] pod "kube-proxy-n4pph" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.490284  661546 pod_ready.go:82] duration metric: took 3.949342ms for pod "kube-proxy-n4pph" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.490296  661546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.872396  661546 pod_ready.go:93] pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace has status "Ready":"True"
	I1209 11:58:00.872420  661546 pod_ready.go:82] duration metric: took 382.116873ms for pod "kube-scheduler-embed-certs-005123" in "kube-system" namespace to be "Ready" ...
	I1209 11:58:00.872428  661546 pod_ready.go:39] duration metric: took 7.97282742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 11:58:00.872446  661546 api_server.go:52] waiting for apiserver process to appear ...
	I1209 11:58:00.872502  661546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:58:00.887281  661546 api_server.go:72] duration metric: took 8.239108757s to wait for apiserver process to appear ...
	I1209 11:58:00.887312  661546 api_server.go:88] waiting for apiserver healthz status ...
	I1209 11:58:00.887333  661546 api_server.go:253] Checking apiserver healthz at https://192.168.72.218:8443/healthz ...
	I1209 11:58:00.892005  661546 api_server.go:279] https://192.168.72.218:8443/healthz returned 200:
	ok
	I1209 11:58:00.893247  661546 api_server.go:141] control plane version: v1.31.2
	I1209 11:58:00.893277  661546 api_server.go:131] duration metric: took 5.95753ms to wait for apiserver health ...
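The same health probe can be repeated by hand; /healthz (along with /readyz and /livez) is readable without credentials under the default RBAC rules, so an insecure curl against the address from the log is enough for a quick check:

    curl -k https://192.168.72.218:8443/healthz
    curl -k "https://192.168.72.218:8443/readyz?verbose"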
	I1209 11:58:00.893288  661546 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 11:58:01.074723  661546 system_pods.go:59] 9 kube-system pods found
	I1209 11:58:01.074756  661546 system_pods.go:61] "coredns-7c65d6cfc9-t49mk" [ca3ba094-58a2-401d-8aea-46d6d96baacb] Running
	I1209 11:58:01.074762  661546 system_pods.go:61] "coredns-7c65d6cfc9-xspr9" [9384e9ea-987e-4728-bdf2-773645d52ab1] Running
	I1209 11:58:01.074766  661546 system_pods.go:61] "etcd-embed-certs-005123" [f8b23ab4-b852-4598-93d7-3b6eb1543a4b] Running
	I1209 11:58:01.074771  661546 system_pods.go:61] "kube-apiserver-embed-certs-005123" [96b0cb5a-1d81-48fc-bbc9-015de7c48ac5] Running
	I1209 11:58:01.074774  661546 system_pods.go:61] "kube-controller-manager-embed-certs-005123" [33c237d8-8f5a-4d8f-870a-7ba0dc2180dd] Running
	I1209 11:58:01.074777  661546 system_pods.go:61] "kube-proxy-n4pph" [520d101f-0df0-413f-a0fc-22ecc2884d40] Running
	I1209 11:58:01.074780  661546 system_pods.go:61] "kube-scheduler-embed-certs-005123" [11dcaf15-fc3b-4945-af9b-d08aa5528679] Running
	I1209 11:58:01.074786  661546 system_pods.go:61] "metrics-server-6867b74b74-zfw9r" [8438b820-4cc5-4d7b-8af5-9349fdd87ca8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:58:01.074791  661546 system_pods.go:61] "storage-provisioner" [91ceb801-7262-4d7e-9623-c8c1931fc34b] Running
	I1209 11:58:01.074797  661546 system_pods.go:74] duration metric: took 181.502993ms to wait for pod list to return data ...
	I1209 11:58:01.074804  661546 default_sa.go:34] waiting for default service account to be created ...
	I1209 11:58:01.272664  661546 default_sa.go:45] found service account: "default"
	I1209 11:58:01.272697  661546 default_sa.go:55] duration metric: took 197.886347ms for default service account to be created ...
	I1209 11:58:01.272707  661546 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 11:58:01.475062  661546 system_pods.go:86] 9 kube-system pods found
	I1209 11:58:01.475096  661546 system_pods.go:89] "coredns-7c65d6cfc9-t49mk" [ca3ba094-58a2-401d-8aea-46d6d96baacb] Running
	I1209 11:58:01.475102  661546 system_pods.go:89] "coredns-7c65d6cfc9-xspr9" [9384e9ea-987e-4728-bdf2-773645d52ab1] Running
	I1209 11:58:01.475105  661546 system_pods.go:89] "etcd-embed-certs-005123" [f8b23ab4-b852-4598-93d7-3b6eb1543a4b] Running
	I1209 11:58:01.475109  661546 system_pods.go:89] "kube-apiserver-embed-certs-005123" [96b0cb5a-1d81-48fc-bbc9-015de7c48ac5] Running
	I1209 11:58:01.475114  661546 system_pods.go:89] "kube-controller-manager-embed-certs-005123" [33c237d8-8f5a-4d8f-870a-7ba0dc2180dd] Running
	I1209 11:58:01.475118  661546 system_pods.go:89] "kube-proxy-n4pph" [520d101f-0df0-413f-a0fc-22ecc2884d40] Running
	I1209 11:58:01.475121  661546 system_pods.go:89] "kube-scheduler-embed-certs-005123" [11dcaf15-fc3b-4945-af9b-d08aa5528679] Running
	I1209 11:58:01.475131  661546 system_pods.go:89] "metrics-server-6867b74b74-zfw9r" [8438b820-4cc5-4d7b-8af5-9349fdd87ca8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 11:58:01.475138  661546 system_pods.go:89] "storage-provisioner" [91ceb801-7262-4d7e-9623-c8c1931fc34b] Running
	I1209 11:58:01.475148  661546 system_pods.go:126] duration metric: took 202.434687ms to wait for k8s-apps to be running ...
	I1209 11:58:01.475158  661546 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 11:58:01.475220  661546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:58:01.490373  661546 system_svc.go:56] duration metric: took 15.20079ms WaitForService to wait for kubelet
	I1209 11:58:01.490416  661546 kubeadm.go:582] duration metric: took 8.842250416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 11:58:01.490451  661546 node_conditions.go:102] verifying NodePressure condition ...
	I1209 11:58:01.673621  661546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 11:58:01.673651  661546 node_conditions.go:123] node cpu capacity is 2
	I1209 11:58:01.673662  661546 node_conditions.go:105] duration metric: took 183.205852ms to run NodePressure ...
	I1209 11:58:01.673674  661546 start.go:241] waiting for startup goroutines ...
	I1209 11:58:01.673681  661546 start.go:246] waiting for cluster config update ...
	I1209 11:58:01.673691  661546 start.go:255] writing updated cluster config ...
	I1209 11:58:01.673995  661546 ssh_runner.go:195] Run: rm -f paused
	I1209 11:58:01.725363  661546 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 11:58:01.727275  661546 out.go:177] * Done! kubectl is now configured to use "embed-certs-005123" cluster and "default" namespace by default
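With the kubeconfig updated, the new cluster can be inspected through the freshly written context, e.g. (assuming KUBECONFIG points at the file mentioned above):

    kubectl --context embed-certs-005123 get nodes -o wide
    kubectl --context embed-certs-005123 -n kube-system get pods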
	I1209 11:58:14.994765  662586 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 11:58:14.994918  662586 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 11:58:14.995050  662586 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:58:14.995118  662586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:58:14.995182  662586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:58:14.995272  662586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:58:14.995353  662586 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:58:14.995410  662586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:58:14.996905  662586 out.go:235]   - Generating certificates and keys ...
	I1209 11:58:14.997000  662586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:58:14.997055  662586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:58:14.997123  662586 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:58:14.997184  662586 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:58:14.997278  662586 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:58:14.997349  662586 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:58:14.997474  662586 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:58:14.997567  662586 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:58:14.997631  662586 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:58:14.997700  662586 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:58:14.997736  662586 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:58:14.997783  662586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:58:14.997826  662586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:58:14.997871  662586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:58:14.997930  662586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:58:14.997977  662586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:58:14.998063  662586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:58:14.998141  662586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:58:14.998199  662586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:58:14.998264  662586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:58:14.999539  662586 out.go:235]   - Booting up control plane ...
	I1209 11:58:14.999663  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:58:14.999748  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:58:14.999824  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:58:14.999946  662586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:58:15.000148  662586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:58:15.000221  662586 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:58:15.000326  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000532  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.000598  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000753  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.000814  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.000971  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001064  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.001273  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001335  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:15.001486  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:58:15.001493  662586 kubeadm.go:310] 
	I1209 11:58:15.001553  662586 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 11:58:15.001616  662586 kubeadm.go:310] 		timed out waiting for the condition
	I1209 11:58:15.001631  662586 kubeadm.go:310] 
	I1209 11:58:15.001685  662586 kubeadm.go:310] 	This error is likely caused by:
	I1209 11:58:15.001732  662586 kubeadm.go:310] 		- The kubelet is not running
	I1209 11:58:15.001883  662586 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 11:58:15.001897  662586 kubeadm.go:310] 
	I1209 11:58:15.002041  662586 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 11:58:15.002087  662586 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 11:58:15.002146  662586 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 11:58:15.002156  662586 kubeadm.go:310] 
	I1209 11:58:15.002294  662586 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 11:58:15.002373  662586 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 11:58:15.002380  662586 kubeadm.go:310] 
	I1209 11:58:15.002502  662586 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 11:58:15.002623  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 11:58:15.002725  662586 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 11:58:15.002799  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 11:58:15.002835  662586 kubeadm.go:310] 
	W1209 11:58:15.002956  662586 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1209 11:58:15.003022  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 11:58:15.469838  662586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:58:15.484503  662586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 11:58:15.493409  662586 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 11:58:15.493430  662586 kubeadm.go:157] found existing configuration files:
	
	I1209 11:58:15.493487  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 11:58:15.502508  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 11:58:15.502568  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 11:58:15.511743  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 11:58:15.519855  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 11:58:15.519913  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 11:58:15.528743  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 11:58:15.537000  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 11:58:15.537072  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 11:58:15.546520  662586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 11:58:15.555448  662586 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 11:58:15.555526  662586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
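The four grep-and-remove steps above all follow one pattern: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted before kubeadm init is retried. A compact sketch of that pattern (file names and endpoint taken from the log; the loop itself is illustrative, not minikube's actual code):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"   # drop configs that do not point at the expected endpoint
    done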
	I1209 11:58:15.565618  662586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 11:58:15.631763  662586 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1209 11:58:15.631832  662586 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 11:58:15.798683  662586 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 11:58:15.798822  662586 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 11:58:15.798957  662586 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 11:58:15.974522  662586 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 11:58:15.976286  662586 out.go:235]   - Generating certificates and keys ...
	I1209 11:58:15.976408  662586 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 11:58:15.976492  662586 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 11:58:15.976616  662586 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 11:58:15.976714  662586 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 11:58:15.976813  662586 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 11:58:15.976889  662586 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 11:58:15.976978  662586 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 11:58:15.977064  662586 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 11:58:15.977184  662586 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 11:58:15.977251  662586 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 11:58:15.977287  662586 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 11:58:15.977363  662586 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 11:58:16.193383  662586 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 11:58:16.324912  662586 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 11:58:16.541372  662586 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 11:58:16.786389  662586 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 11:58:16.807241  662586 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 11:58:16.808750  662586 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 11:58:16.808823  662586 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 11:58:16.951756  662586 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 11:58:16.954338  662586 out.go:235]   - Booting up control plane ...
	I1209 11:58:16.954486  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 11:58:16.968892  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 11:58:16.970556  662586 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 11:58:16.971301  662586 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 11:58:16.974040  662586 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 11:58:56.976537  662586 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1209 11:58:56.976966  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:58:56.977214  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:01.977861  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:01.978074  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:11.978821  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:11.979056  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 11:59:31.980118  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 11:59:31.980386  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 12:00:11.981507  662586 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1209 12:00:11.981791  662586 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1209 12:00:11.981804  662586 kubeadm.go:310] 
	I1209 12:00:11.981863  662586 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1209 12:00:11.981916  662586 kubeadm.go:310] 		timed out waiting for the condition
	I1209 12:00:11.981926  662586 kubeadm.go:310] 
	I1209 12:00:11.981977  662586 kubeadm.go:310] 	This error is likely caused by:
	I1209 12:00:11.982028  662586 kubeadm.go:310] 		- The kubelet is not running
	I1209 12:00:11.982232  662586 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 12:00:11.982262  662586 kubeadm.go:310] 
	I1209 12:00:11.982449  662586 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 12:00:11.982506  662586 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1209 12:00:11.982555  662586 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1209 12:00:11.982564  662586 kubeadm.go:310] 
	I1209 12:00:11.982709  662586 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1209 12:00:11.982824  662586 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1209 12:00:11.982837  662586 kubeadm.go:310] 
	I1209 12:00:11.982975  662586 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1209 12:00:11.983092  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1209 12:00:11.983186  662586 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1209 12:00:11.983259  662586 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1209 12:00:11.983308  662586 kubeadm.go:310] 
	I1209 12:00:11.983442  662586 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 12:00:11.983534  662586 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1209 12:00:11.983622  662586 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1209 12:00:11.983692  662586 kubeadm.go:394] duration metric: took 7m57.372617524s to StartCluster
	I1209 12:00:11.983778  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 12:00:11.983852  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 12:00:12.032068  662586 cri.go:89] found id: ""
	I1209 12:00:12.032110  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.032126  662586 logs.go:284] No container was found matching "kube-apiserver"
	I1209 12:00:12.032139  662586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 12:00:12.032232  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 12:00:12.074929  662586 cri.go:89] found id: ""
	I1209 12:00:12.074977  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.074990  662586 logs.go:284] No container was found matching "etcd"
	I1209 12:00:12.075001  662586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 12:00:12.075074  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 12:00:12.113547  662586 cri.go:89] found id: ""
	I1209 12:00:12.113582  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.113592  662586 logs.go:284] No container was found matching "coredns"
	I1209 12:00:12.113598  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 12:00:12.113661  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 12:00:12.147436  662586 cri.go:89] found id: ""
	I1209 12:00:12.147465  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.147475  662586 logs.go:284] No container was found matching "kube-scheduler"
	I1209 12:00:12.147481  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 12:00:12.147535  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 12:00:12.184398  662586 cri.go:89] found id: ""
	I1209 12:00:12.184439  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.184453  662586 logs.go:284] No container was found matching "kube-proxy"
	I1209 12:00:12.184463  662586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 12:00:12.184541  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 12:00:12.230844  662586 cri.go:89] found id: ""
	I1209 12:00:12.230884  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.230896  662586 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 12:00:12.230905  662586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 12:00:12.230981  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 12:00:12.264897  662586 cri.go:89] found id: ""
	I1209 12:00:12.264930  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.264939  662586 logs.go:284] No container was found matching "kindnet"
	I1209 12:00:12.264946  662586 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1209 12:00:12.265001  662586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1209 12:00:12.303553  662586 cri.go:89] found id: ""
	I1209 12:00:12.303594  662586 logs.go:282] 0 containers: []
	W1209 12:00:12.303607  662586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1209 12:00:12.303622  662586 logs.go:123] Gathering logs for container status ...
	I1209 12:00:12.303638  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 12:00:12.342799  662586 logs.go:123] Gathering logs for kubelet ...
	I1209 12:00:12.342838  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 12:00:12.392992  662586 logs.go:123] Gathering logs for dmesg ...
	I1209 12:00:12.393039  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 12:00:12.407065  662586 logs.go:123] Gathering logs for describe nodes ...
	I1209 12:00:12.407100  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 12:00:12.483599  662586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 12:00:12.483651  662586 logs.go:123] Gathering logs for CRI-O ...
	I1209 12:00:12.483675  662586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1209 12:00:12.591518  662586 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1209 12:00:12.591615  662586 out.go:270] * 
	W1209 12:00:12.591715  662586 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 12:00:12.591737  662586 out.go:270] * 
	W1209 12:00:12.592644  662586 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 12:00:12.596340  662586 out.go:201] 
	W1209 12:00:12.597706  662586 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 12:00:12.597757  662586 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 12:00:12.597798  662586 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 12:00:12.599219  662586 out.go:201] 
	
	
	==> CRI-O <==
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.323970593Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746336323945154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b5f4774-2b84-450c-9add-e19f3b3d9772 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.324543457Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c653fc31-ad84-4ccf-8f3d-0880570c51be name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.324612140Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c653fc31-ad84-4ccf-8f3d-0880570c51be name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.324642739Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c653fc31-ad84-4ccf-8f3d-0880570c51be name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.355677953Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fc0b432-6b59-46b9-8afc-cac657a4d769 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.355777580Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fc0b432-6b59-46b9-8afc-cac657a4d769 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.356786645Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ce93c7a-c661-4f78-a4d4-aed6d0280e12 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.357262634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746336357235170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ce93c7a-c661-4f78-a4d4-aed6d0280e12 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.357694344Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f367d503-9d0c-4c04-8328-28aba4a6e435 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.357764199Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f367d503-9d0c-4c04-8328-28aba4a6e435 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.357815303Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f367d503-9d0c-4c04-8328-28aba4a6e435 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.403757919Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c6e5351-f02e-4eff-95ef-4b54f2d4a088 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.403855442Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c6e5351-f02e-4eff-95ef-4b54f2d4a088 name=/runtime.v1.RuntimeService/Version
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.405219102Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6bc7f405-14ad-45da-a089-76ebb6fdcaf9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.405667664Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746336405613121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bc7f405-14ad-45da-a089-76ebb6fdcaf9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.406272532Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae53d4f9-4c4a-46b7-b37a-b150dc4c2fd8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.406328559Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae53d4f9-4c4a-46b7-b37a-b150dc4c2fd8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.406402100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ae53d4f9-4c4a-46b7-b37a-b150dc4c2fd8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.437777066Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5b7df25-91cf-40f3-a9f3-da18707dcefc name=/runtime.v1.RuntimeService/Version
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.437871113Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5b7df25-91cf-40f3-a9f3-da18707dcefc name=/runtime.v1.RuntimeService/Version
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.438792544Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e3d9880-3784-4ddf-83ee-3be30352f127 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.439226805Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733746336439204105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e3d9880-3784-4ddf-83ee-3be30352f127 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.439683357Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6cf13254-65af-4911-bf91-02afba298136 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.439729264Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6cf13254-65af-4911-bf91-02afba298136 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 12:12:16 old-k8s-version-014592 crio[629]: time="2024-12-09 12:12:16.439764711Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6cf13254-65af-4911-bf91-02afba298136 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 11:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053266] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039222] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.927032] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.003479] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.562691] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 9 11:52] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.070928] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073924] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.215176] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.123356] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.253740] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +6.933985] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.063858] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.761344] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +9.884362] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 9 11:56] systemd-fstab-generator[5066]: Ignoring "noauto" option for root device
	[Dec 9 11:58] systemd-fstab-generator[5348]: Ignoring "noauto" option for root device
	[  +0.064846] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:12:16 up 20 min,  0 users,  load average: 0.01, 0.03, 0.05
	Linux old-k8s-version-014592 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:628 +0x53
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000ba3d50, 0xc000db7ce0)
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]: goroutine 166 [chan receive]:
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000bcd170)
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]: goroutine 167 [select]:
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d6bef0, 0x4f0ac20, 0xc000d98370, 0x1, 0xc00009e0c0)
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0001f1340, 0xc00009e0c0)
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000ba3d80, 0xc000db7da0)
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Dec 09 12:12:16 old-k8s-version-014592 kubelet[6909]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Dec 09 12:12:16 old-k8s-version-014592 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 09 12:12:16 old-k8s-version-014592 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-014592 -n old-k8s-version-014592
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-014592 -n old-k8s-version-014592: exit status 2 (245.858727ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-014592" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (178.45s)
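
Note: the failure above is the same kubeadm wait-control-plane timeout seen throughout this group: the kubelet on old-k8s-version-014592 never answers http://localhost:10248/healthz, and minikube's own suggestion points at the kubelet cgroup driver. The sketch below is a minimal, hedged troubleshooting sequence, not a confirmed fix; it only bundles the commands already quoted in the log (systemctl, journalctl, crictl) plus a hypothetical check that the kubelet cgroupDriver in /var/lib/kubelet/config.yaml matches CRI-O's cgroup_manager. The profile name is taken from the log, and the final restart flag is the one minikube itself suggests.

	# Hedged troubleshooting sketch for the kubelet-not-running loop above.
	# Assumes the "old-k8s-version-014592" profile from this log still exists;
	# the cgroup-driver comparison is a hypothesis, not a verified root cause.
	PROFILE=old-k8s-version-014592

	# Commands kubeadm itself suggests in the failure output:
	minikube -p "$PROFILE" ssh -- sudo systemctl status kubelet
	minikube -p "$PROFILE" ssh -- sudo journalctl -xeu kubelet -n 100
	minikube -p "$PROFILE" ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a

	# Compare the kubelet cgroup driver with CRI-O's cgroup manager
	# (a mismatch commonly produces exactly this healthz connection-refused loop):
	minikube -p "$PROFILE" ssh -- sudo grep cgroupDriver /var/lib/kubelet/config.yaml
	minikube -p "$PROFILE" ssh -- sudo grep -r cgroup_manager /etc/crio/

	# Restart with the flag minikube suggests in the log above:
	minikube start -p "$PROFILE" --extra-config=kubelet.cgroup-driver=systemd

If the two cgroup settings disagree, restarting with the suggested --extra-config flag (or aligning CRI-O's cgroup_manager) is the first thing to try before digging further into the kubelet goroutine dump shown above.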

                                                
                                    

Test pass (245/316)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 23.31
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 13.75
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.14
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.64
22 TestOffline 83.87
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 142.37
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 11.5
35 TestAddons/parallel/Registry 16.72
37 TestAddons/parallel/InspektorGadget 11.89
40 TestAddons/parallel/CSI 67.89
41 TestAddons/parallel/Headlamp 19.59
42 TestAddons/parallel/CloudSpanner 6.55
43 TestAddons/parallel/LocalPath 55.07
44 TestAddons/parallel/NvidiaDevicePlugin 6.84
45 TestAddons/parallel/Yakd 10.88
48 TestCertOptions 79.75
49 TestCertExpiration 272.75
51 TestForceSystemdFlag 65.38
52 TestForceSystemdEnv 101.57
54 TestKVMDriverInstallOrUpdate 4.51
58 TestErrorSpam/setup 41.79
59 TestErrorSpam/start 0.37
60 TestErrorSpam/status 0.76
61 TestErrorSpam/pause 1.54
62 TestErrorSpam/unpause 1.75
63 TestErrorSpam/stop 4.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 52.17
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 44.9
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 5.14
75 TestFunctional/serial/CacheCmd/cache/add_local 2.53
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.17
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 35.71
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.46
86 TestFunctional/serial/LogsFileCmd 1.43
87 TestFunctional/serial/InvalidService 4.28
89 TestFunctional/parallel/ConfigCmd 0.42
90 TestFunctional/parallel/DashboardCmd 24.84
91 TestFunctional/parallel/DryRun 0.31
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 1.21
97 TestFunctional/parallel/ServiceCmdConnect 11.77
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 47.5
101 TestFunctional/parallel/SSHCmd 0.41
102 TestFunctional/parallel/CpCmd 1.38
103 TestFunctional/parallel/MySQL 22.56
104 TestFunctional/parallel/FileSync 0.25
105 TestFunctional/parallel/CertSync 1.68
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
113 TestFunctional/parallel/License 1.13
114 TestFunctional/parallel/Version/short 0.34
115 TestFunctional/parallel/Version/components 0.68
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.45
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.37
121 TestFunctional/parallel/ImageCommands/Setup 1.74
122 TestFunctional/parallel/ServiceCmd/DeployApp 12.18
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.08
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.69
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.12
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.82
139 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
140 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
141 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
142 TestFunctional/parallel/ServiceCmd/List 0.41
143 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
146 TestFunctional/parallel/MountCmd/any-port 21.05
147 TestFunctional/parallel/ServiceCmd/Format 0.39
148 TestFunctional/parallel/ProfileCmd/profile_list 0.42
149 TestFunctional/parallel/ServiceCmd/URL 0.37
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
151 TestFunctional/parallel/MountCmd/specific-port 1.95
152 TestFunctional/parallel/MountCmd/VerifyCleanup 0.91
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 195.01
160 TestMultiControlPlane/serial/DeployApp 6.82
161 TestMultiControlPlane/serial/PingHostFromPods 1.2
162 TestMultiControlPlane/serial/AddWorkerNode 57.04
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
165 TestMultiControlPlane/serial/CopyFile 13.11
171 TestMultiControlPlane/serial/DeleteSecondaryNode 16.67
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
174 TestMultiControlPlane/serial/RestartCluster 349.09
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
176 TestMultiControlPlane/serial/AddSecondaryNode 77.32
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
181 TestJSONOutput/start/Command 52.88
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.66
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.59
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.34
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 86.38
213 TestMountStart/serial/StartWithMountFirst 30.52
214 TestMountStart/serial/VerifyMountFirst 0.37
215 TestMountStart/serial/StartWithMountSecond 25.93
216 TestMountStart/serial/VerifyMountSecond 0.39
217 TestMountStart/serial/DeleteFirst 0.88
218 TestMountStart/serial/VerifyMountPostDelete 0.39
219 TestMountStart/serial/Stop 1.28
220 TestMountStart/serial/RestartStopped 23.43
221 TestMountStart/serial/VerifyMountPostStop 0.39
224 TestMultiNode/serial/FreshStart2Nodes 112.67
225 TestMultiNode/serial/DeployApp2Nodes 6.27
226 TestMultiNode/serial/PingHostFrom2Pods 0.81
227 TestMultiNode/serial/AddNode 51.31
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.57
230 TestMultiNode/serial/CopyFile 7.27
231 TestMultiNode/serial/StopNode 2.34
232 TestMultiNode/serial/StartAfterStop 38.85
234 TestMultiNode/serial/DeleteNode 2.34
236 TestMultiNode/serial/RestartMultiNode 185.02
237 TestMultiNode/serial/ValidateNameConflict 44.36
244 TestScheduledStopUnix 117.59
248 TestRunningBinaryUpgrade 139.26
257 TestPause/serial/Start 59.4
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
266 TestNoKubernetes/serial/StartWithK8s 90.7
271 TestNetworkPlugins/group/false 3.15
276 TestNoKubernetes/serial/StartWithStopK8s 49.38
277 TestNoKubernetes/serial/Start 48.58
278 TestStoppedBinaryUpgrade/Setup 2.3
279 TestStoppedBinaryUpgrade/Upgrade 182.38
280 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
281 TestNoKubernetes/serial/ProfileList 1.04
282 TestNoKubernetes/serial/Stop 2.38
283 TestNoKubernetes/serial/StartNoArgs 59.28
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
285 TestStoppedBinaryUpgrade/MinikubeLogs 1.05
289 TestStartStop/group/embed-certs/serial/FirstStart 50.67
291 TestStartStop/group/no-preload/serial/FirstStart 93.34
292 TestStartStop/group/embed-certs/serial/DeployApp 11.32
293 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.95
295 TestStartStop/group/no-preload/serial/DeployApp 10.24
296 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.96
300 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.26
301 TestStartStop/group/embed-certs/serial/SecondStart 685.82
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.26
306 TestStartStop/group/no-preload/serial/SecondStart 555.03
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.91
309 TestStartStop/group/old-k8s-version/serial/Stop 3.29
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
313 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 458.6
323 TestStartStop/group/newest-cni/serial/FirstStart 46.42
324 TestNetworkPlugins/group/auto/Start 61.85
325 TestStartStop/group/newest-cni/serial/DeployApp 0
326 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.39
327 TestStartStop/group/newest-cni/serial/Stop 10.4
328 TestNetworkPlugins/group/kindnet/Start 92.23
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
330 TestStartStop/group/newest-cni/serial/SecondStart 77.93
331 TestNetworkPlugins/group/auto/KubeletFlags 0.25
332 TestNetworkPlugins/group/auto/NetCatPod 10.36
333 TestNetworkPlugins/group/auto/DNS 0.16
334 TestNetworkPlugins/group/auto/Localhost 0.15
335 TestNetworkPlugins/group/auto/HairPin 0.15
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
339 TestNetworkPlugins/group/calico/Start 83.34
340 TestStartStop/group/newest-cni/serial/Pause 3.96
341 TestNetworkPlugins/group/custom-flannel/Start 99.52
342 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
343 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
344 TestNetworkPlugins/group/kindnet/NetCatPod 10.23
345 TestNetworkPlugins/group/kindnet/DNS 0.16
346 TestNetworkPlugins/group/kindnet/Localhost 0.13
347 TestNetworkPlugins/group/kindnet/HairPin 0.14
348 TestNetworkPlugins/group/enable-default-cni/Start 83.02
349 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
350 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.29
351 TestNetworkPlugins/group/flannel/Start 84.29
352 TestNetworkPlugins/group/calico/ControllerPod 6.02
353 TestNetworkPlugins/group/calico/KubeletFlags 0.29
354 TestNetworkPlugins/group/calico/NetCatPod 13.4
355 TestNetworkPlugins/group/calico/DNS 0.15
356 TestNetworkPlugins/group/calico/Localhost 0.15
357 TestNetworkPlugins/group/calico/HairPin 0.13
358 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
359 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.23
360 TestNetworkPlugins/group/custom-flannel/DNS 0.19
361 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
362 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
363 TestNetworkPlugins/group/bridge/Start 61.97
364 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
365 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.37
366 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
367 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
368 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
369 TestNetworkPlugins/group/flannel/ControllerPod 6.01
370 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
371 TestNetworkPlugins/group/flannel/NetCatPod 10.21
372 TestNetworkPlugins/group/flannel/DNS 0.16
373 TestNetworkPlugins/group/flannel/Localhost 0.12
374 TestNetworkPlugins/group/flannel/HairPin 0.12
375 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
376 TestNetworkPlugins/group/bridge/NetCatPod 11.24
377 TestNetworkPlugins/group/bridge/DNS 0.16
378 TestNetworkPlugins/group/bridge/Localhost 0.15
379 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.20.0/json-events (23.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-942086 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-942086 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.309759106s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (23.31s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1209 10:33:54.889101  617017 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1209 10:33:54.889188  617017 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-942086
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-942086: exit status 85 (71.724813ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-942086 | jenkins | v1.34.0 | 09 Dec 24 10:33 UTC |          |
	|         | -p download-only-942086        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 10:33:31
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 10:33:31.622774  617029 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:33:31.623082  617029 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:33:31.623093  617029 out.go:358] Setting ErrFile to fd 2...
	I1209 10:33:31.623098  617029 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:33:31.623388  617029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	W1209 10:33:31.623570  617029 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20068-609844/.minikube/config/config.json: open /home/jenkins/minikube-integration/20068-609844/.minikube/config/config.json: no such file or directory
	I1209 10:33:31.624177  617029 out.go:352] Setting JSON to true
	I1209 10:33:31.625182  617029 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11756,"bootTime":1733728656,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 10:33:31.625303  617029 start.go:139] virtualization: kvm guest
	I1209 10:33:31.627590  617029 out.go:97] [download-only-942086] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1209 10:33:31.627696  617029 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 10:33:31.627748  617029 notify.go:220] Checking for updates...
	I1209 10:33:31.628890  617029 out.go:169] MINIKUBE_LOCATION=20068
	I1209 10:33:31.630105  617029 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 10:33:31.631331  617029 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:33:31.632559  617029 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:33:31.633749  617029 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1209 10:33:31.635772  617029 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 10:33:31.636033  617029 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 10:33:31.668187  617029 out.go:97] Using the kvm2 driver based on user configuration
	I1209 10:33:31.668231  617029 start.go:297] selected driver: kvm2
	I1209 10:33:31.668239  617029 start.go:901] validating driver "kvm2" against <nil>
	I1209 10:33:31.668734  617029 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:33:31.668870  617029 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 10:33:31.684059  617029 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 10:33:31.684144  617029 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 10:33:31.685047  617029 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1209 10:33:31.685288  617029 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 10:33:31.685335  617029 cni.go:84] Creating CNI manager for ""
	I1209 10:33:31.685429  617029 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 10:33:31.685444  617029 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 10:33:31.685512  617029 start.go:340] cluster config:
	{Name:download-only-942086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-942086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:33:31.685762  617029 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:33:31.687463  617029 out.go:97] Downloading VM boot image ...
	I1209 10:33:31.687521  617029 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20068-609844/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 10:33:41.750601  617029 out.go:97] Starting "download-only-942086" primary control-plane node in "download-only-942086" cluster
	I1209 10:33:41.750665  617029 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 10:33:41.846832  617029 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1209 10:33:41.846884  617029 cache.go:56] Caching tarball of preloaded images
	I1209 10:33:41.847075  617029 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 10:33:41.848623  617029 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1209 10:33:41.848655  617029 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1209 10:33:41.943401  617029 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-942086 host does not exist
	  To start a cluster, run: "minikube start -p download-only-942086"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-942086
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (13.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-596508 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-596508 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.753037831s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (13.75s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1209 10:34:08.989308  617017 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1209 10:34:08.989374  617017 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-596508
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-596508: exit status 85 (65.511656ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-942086 | jenkins | v1.34.0 | 09 Dec 24 10:33 UTC |                     |
	|         | -p download-only-942086        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Dec 24 10:33 UTC | 09 Dec 24 10:33 UTC |
	| delete  | -p download-only-942086        | download-only-942086 | jenkins | v1.34.0 | 09 Dec 24 10:33 UTC | 09 Dec 24 10:33 UTC |
	| start   | -o=json --download-only        | download-only-596508 | jenkins | v1.34.0 | 09 Dec 24 10:33 UTC |                     |
	|         | -p download-only-596508        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 10:33:55
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 10:33:55.279402  617265 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:33:55.279544  617265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:33:55.279557  617265 out.go:358] Setting ErrFile to fd 2...
	I1209 10:33:55.279564  617265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:33:55.279791  617265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 10:33:55.280442  617265 out.go:352] Setting JSON to true
	I1209 10:33:55.281438  617265 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11779,"bootTime":1733728656,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 10:33:55.281553  617265 start.go:139] virtualization: kvm guest
	I1209 10:33:55.283522  617265 out.go:97] [download-only-596508] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 10:33:55.283656  617265 notify.go:220] Checking for updates...
	I1209 10:33:55.285124  617265 out.go:169] MINIKUBE_LOCATION=20068
	I1209 10:33:55.286486  617265 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 10:33:55.287565  617265 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:33:55.288633  617265 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:33:55.289642  617265 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1209 10:33:55.291410  617265 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 10:33:55.291612  617265 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 10:33:55.322353  617265 out.go:97] Using the kvm2 driver based on user configuration
	I1209 10:33:55.322380  617265 start.go:297] selected driver: kvm2
	I1209 10:33:55.322386  617265 start.go:901] validating driver "kvm2" against <nil>
	I1209 10:33:55.322741  617265 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:33:55.322830  617265 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20068-609844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 10:33:55.338862  617265 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 10:33:55.338925  617265 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 10:33:55.339499  617265 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1209 10:33:55.339656  617265 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 10:33:55.339690  617265 cni.go:84] Creating CNI manager for ""
	I1209 10:33:55.339741  617265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 10:33:55.339753  617265 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 10:33:55.339800  617265 start.go:340] cluster config:
	{Name:download-only-596508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-596508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:33:55.339907  617265 iso.go:125] acquiring lock: {Name:mk3bf4d9da9b75cc0ae0a3732f4492bac54a4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 10:33:55.341548  617265 out.go:97] Starting "download-only-596508" primary control-plane node in "download-only-596508" cluster
	I1209 10:33:55.341570  617265 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:33:55.799485  617265 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 10:33:55.799530  617265 cache.go:56] Caching tarball of preloaded images
	I1209 10:33:55.799670  617265 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 10:33:55.801382  617265 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1209 10:33:55.801412  617265 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1209 10:33:55.899326  617265 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/20068-609844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-596508 host does not exist
	  To start a cluster, run: "minikube start -p download-only-596508"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-596508
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.64s)

                                                
                                                
=== RUN   TestBinaryMirror
I1209 10:34:09.595173  617017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-654291 --alsologtostderr --binary-mirror http://127.0.0.1:45797 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-654291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-654291
--- PASS: TestBinaryMirror (0.64s)

                                                
                                    
TestOffline (83.87s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-482227 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-482227 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m22.79452007s)
helpers_test.go:175: Cleaning up "offline-crio-482227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-482227
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-482227: (1.073144338s)
--- PASS: TestOffline (83.87s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-156041
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-156041: exit status 85 (54.613425ms)

                                                
                                                
-- stdout --
	* Profile "addons-156041" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-156041"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-156041
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-156041: exit status 85 (55.795394ms)

                                                
                                                
-- stdout --
	* Profile "addons-156041" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-156041"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (142.37s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-156041 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-156041 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m22.373527444s)
--- PASS: TestAddons/Setup (142.37s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-156041 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-156041 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.5s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-156041 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-156041 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [750ac467-92cd-4f0f-8288-ccecae9af727] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [750ac467-92cd-4f0f-8288-ccecae9af727] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004380898s
addons_test.go:633: (dbg) Run:  kubectl --context addons-156041 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-156041 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-156041 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.50s)

                                                
                                    
TestAddons/parallel/Registry (16.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.033347ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-dz5k9" [94e4ed5a-c1d2-4327-99af-d2d3f88d0300] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003510553s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8fjdn" [92870ba1-49e0-461f-91f0-1d0ee71c79d7] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.015889388s
addons_test.go:331: (dbg) Run:  kubectl --context addons-156041 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-156041 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-156041 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.947408907s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 ip
2024/12/09 10:37:12 [DEBUG] GET http://192.168.39.161:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.72s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.89s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-64ccm" [85e259dc-dcd7-4ae1-9c19-09e5fc082a23] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004286278s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-156041 addons disable inspektor-gadget --alsologtostderr -v=1: (5.886912795s)
--- PASS: TestAddons/parallel/InspektorGadget (11.89s)

                                                
                                    
TestAddons/parallel/CSI (67.89s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1209 10:37:03.584158  617017 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1209 10:37:03.589684  617017 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1209 10:37:03.589713  617017 kapi.go:107] duration metric: took 5.569522ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.580552ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-156041 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-156041 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d9c2bff3-a751-4b98-b190-7c5243917512] Pending
helpers_test.go:344: "task-pv-pod" [d9c2bff3-a751-4b98-b190-7c5243917512] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d9c2bff3-a751-4b98-b190-7c5243917512] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003419819s
addons_test.go:511: (dbg) Run:  kubectl --context addons-156041 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-156041 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-156041 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-156041 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-156041 delete pod task-pv-pod: (1.227170703s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-156041 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-156041 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-156041 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f1fb80a4-c638-430c-a5fe-1dc735303cc7] Pending
helpers_test.go:344: "task-pv-pod-restore" [f1fb80a4-c638-430c-a5fe-1dc735303cc7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f1fb80a4-c638-430c-a5fe-1dc735303cc7] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00392309s
addons_test.go:553: (dbg) Run:  kubectl --context addons-156041 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-156041 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-156041 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-156041 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.820241055s)
--- PASS: TestAddons/parallel/CSI (67.89s)

                                                
                                    
TestAddons/parallel/Headlamp (19.59s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-156041 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-nbhzk" [4955e17d-4483-4df0-9b73-85eff7bb90f8] Pending
helpers_test.go:344: "headlamp-cd8ffd6fc-nbhzk" [4955e17d-4483-4df0-9b73-85eff7bb90f8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-nbhzk" [4955e17d-4483-4df0-9b73-85eff7bb90f8] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-nbhzk" [4955e17d-4483-4df0-9b73-85eff7bb90f8] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004540941s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-156041 addons disable headlamp --alsologtostderr -v=1: (5.73725142s)
--- PASS: TestAddons/parallel/Headlamp (19.59s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-c67v8" [15547110-a0c0-4064-b954-aaa559a7ace3] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003786241s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.55s)

                                                
                                    
TestAddons/parallel/LocalPath (55.07s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-156041 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-156041 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-156041 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [332adf7a-613b-4f94-a8c0-51fad4546196] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [332adf7a-613b-4f94-a8c0-51fad4546196] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [332adf7a-613b-4f94-a8c0-51fad4546196] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.005937055s
addons_test.go:906: (dbg) Run:  kubectl --context addons-156041 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 ssh "cat /opt/local-path-provisioner/pvc-24d2631a-658d-4b19-9ca8-01e524add183_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-156041 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-156041 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-156041 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.260883848s)
--- PASS: TestAddons/parallel/LocalPath (55.07s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.84s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kjjpq" [9d6efa63-ad7e-417c-9a30-6ae237fb8824] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004353378s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.84s)

                                                
                                    
TestAddons/parallel/Yakd (10.88s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-8skh6" [9849054a-8785-404f-93e3-529509ee6a33] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003640019s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-156041 addons disable yakd --alsologtostderr -v=1: (5.87610844s)
--- PASS: TestAddons/parallel/Yakd (10.88s)

                                                
                                    
TestCertOptions (79.75s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-935628 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-935628 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m18.485441967s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-935628 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-935628 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-935628 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-935628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-935628
--- PASS: TestCertOptions (79.75s)
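
For reference, the check this test automates can be reproduced by hand roughly as follows (the profile name below is a placeholder; the flags and file paths are the ones exercised above):
  $ minikube start -p cert-options-demo --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
  # the extra IP/name should show up in the apiserver certificate SANs
  $ minikube -p cert-options-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'
  # and the non-default port should appear in the kubeconfig server URL
  $ kubectl --context cert-options-demo config view | grep 8555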

                                                
                                    
TestCertExpiration (272.75s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-752166 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-752166 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m3.78321579s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-752166 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-752166 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (27.924311197s)
helpers_test.go:175: Cleaning up "cert-expiration-752166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-752166
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-752166: (1.039603452s)
--- PASS: TestCertExpiration (272.75s)
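
A rough manual equivalent of the two starts above (profile name is a placeholder): the first start issues short-lived certificates, and a later start with a longer --cert-expiration is expected to regenerate them once they have lapsed.
  $ minikube start -p cert-exp-demo --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
  $ sleep 180    # let the 3-minute certificates expire
  $ minikube start -p cert-exp-demo --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
  $ minikube delete -p cert-exp-demo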

                                                
                                    
TestForceSystemdFlag (65.38s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-451257 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-451257 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m4.331640534s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-451257 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-451257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-451257
--- PASS: TestForceSystemdFlag (65.38s)
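
The cat of 02-crio.conf above is the interesting step: with --force-systemd the CRI-O drop-in is expected to select the systemd cgroup manager. A minimal sketch (profile name is a placeholder, and the expected value is an assumption based on the flag's purpose):
  $ minikube start -p force-systemd-demo --memory=2048 --force-systemd --driver=kvm2 --container-runtime=crio
  $ minikube -p force-systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
  # expected (assumption): cgroup_manager = "systemd"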

                                                
                                    
TestForceSystemdEnv (101.57s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-250964 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-250964 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m40.545083141s)
helpers_test.go:175: Cleaning up "force-systemd-env-250964" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-250964
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-250964: (1.027086676s)
--- PASS: TestForceSystemdEnv (101.57s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.51s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1209 11:36:43.010935  617017 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 11:36:43.011115  617017 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1209 11:36:43.043907  617017 install.go:62] docker-machine-driver-kvm2: exit status 1
W1209 11:36:43.044289  617017 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1209 11:36:43.044378  617017 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3502614161/001/docker-machine-driver-kvm2
I1209 11:36:43.674281  617017 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3502614161/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020] Decompressors:map[bz2:0xc000514ec0 gz:0xc000514ec8 tar:0xc000514e50 tar.bz2:0xc000514e60 tar.gz:0xc000514e70 tar.xz:0xc000514e80 tar.zst:0xc000514eb0 tbz2:0xc000514e60 tgz:0xc000514e70 txz:0xc000514e80 tzst:0xc000514eb0 xz:0xc000514ee0 zip:0xc000514ef0 zst:0xc000514ee8] Getters:map[file:0xc0005dcdb0 http:0xc0008c2f00 https:0xc0008c2f50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1209 11:36:43.674346  617017 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3502614161/001/docker-machine-driver-kvm2
I1209 11:36:45.680126  617017 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 11:36:45.680233  617017 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1209 11:36:45.717910  617017 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1209 11:36:45.717953  617017 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1209 11:36:45.718033  617017 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1209 11:36:45.718077  617017 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3502614161/002/docker-machine-driver-kvm2
I1209 11:36:45.757837  617017 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3502614161/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020] Decompressors:map[bz2:0xc000514ec0 gz:0xc000514ec8 tar:0xc000514e50 tar.bz2:0xc000514e60 tar.gz:0xc000514e70 tar.xz:0xc000514e80 tar.zst:0xc000514eb0 tbz2:0xc000514e60 tgz:0xc000514e70 txz:0xc000514e80 tzst:0xc000514eb0 xz:0xc000514ee0 zip:0xc000514ef0 zst:0xc000514ee8] Getters:map[file:0xc0029109b0 http:0xc002914a50 https:0xc002914aa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1209 11:36:45.757900  617017 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3502614161/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.51s)
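
The warnings above show the download fallback in action: the arch-suffixed release asset for the old v1.3.0 driver returns 404, so the un-suffixed name is fetched instead. A hand-run sketch of the same fallback:
  $ curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64 \
      || curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2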

                                                
                                    
TestErrorSpam/setup (41.79s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-700103 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-700103 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-700103 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-700103 --driver=kvm2  --container-runtime=crio: (41.794567019s)
--- PASS: TestErrorSpam/setup (41.79s)

                                                
                                    
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
TestErrorSpam/unpause (1.75s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

                                                
                                    
TestErrorSpam/stop (4.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 stop: (1.574625593s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 stop: (1.527060789s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-700103 --log_dir /tmp/nospam-700103 stop: (1.414309355s)
--- PASS: TestErrorSpam/stop (4.52s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20068-609844/.minikube/files/etc/test/nested/copy/617017/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (52.17s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032350 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1209 10:46:33.304061  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:46:33.310702  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:46:33.322080  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:46:33.343474  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:46:33.384947  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:46:33.466455  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:46:33.628039  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:46:33.949778  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:46:34.591935  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:46:35.873598  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:46:38.436087  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-032350 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (52.170117967s)
--- PASS: TestFunctional/serial/StartWithProxy (52.17s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (44.9s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1209 10:46:43.415311  617017 config.go:182] Loaded profile config "functional-032350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032350 --alsologtostderr -v=8
E1209 10:46:43.558090  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:46:53.799899  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:47:14.281848  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-032350 --alsologtostderr -v=8: (44.902829503s)
functional_test.go:663: soft start took 44.903560631s for "functional-032350" cluster.
I1209 10:47:28.318485  617017 config.go:182] Loaded profile config "functional-032350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (44.90s)
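
The "soft start" here is simply a second minikube start against a profile that is already running; the existing VM and cluster configuration are reused rather than recreated. Sketch with a placeholder profile name:
  $ minikube start -p functional-demo --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 --container-runtime=crio
  $ minikube start -p functional-demo --alsologtostderr -v=8   # soft start: reuses the running VM and cluster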

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-032350 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (5.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-032350 cache add registry.k8s.io/pause:3.1: (1.606864524s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-032350 cache add registry.k8s.io/pause:3.3: (1.766759819s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-032350 cache add registry.k8s.io/pause:latest: (1.770948107s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.14s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-032350 /tmp/TestFunctionalserialCacheCmdcacheadd_local3067397749/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 cache add minikube-local-cache-test:functional-032350
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-032350 cache add minikube-local-cache-test:functional-032350: (2.210658473s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 cache delete minikube-local-cache-test:functional-032350
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-032350
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.53s)
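
The local variant caches an image built on the host rather than one pulled from a registry. A sketch, with placeholder image and profile names:
  $ docker build -t local-cache-demo:functional-demo .
  $ minikube -p functional-demo cache add local-cache-demo:functional-demo
  $ minikube -p functional-demo cache delete local-cache-demo:functional-demo
  $ docker rmi local-cache-demo:functional-demo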

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032350 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.884477ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-032350 cache reload: (1.458781057s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)
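
The sequence above is the whole point of cache reload: an image removed inside the node can be restored from the host-side cache without pulling again. Roughly, with a placeholder profile name:
  $ minikube -p functional-demo ssh sudo crictl rmi registry.k8s.io/pause:latest
  $ minikube -p functional-demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image is gone
  $ minikube -p functional-demo cache reload                                            # pushes cached images back into the node
  $ minikube -p functional-demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again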

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 kubectl -- --context functional-032350 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-032350 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.71s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032350 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1209 10:47:55.244320  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-032350 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.709096359s)
functional_test.go:761: restart took 35.709211493s for "functional-032350" cluster.
I1209 10:48:14.655759  617017 config.go:182] Loaded profile config "functional-032350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (35.71s)
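
--extra-config passes a component.key=value pair straight through to the named control-plane component; the restart above enables an extra admission plugin on the apiserver. Sketch with a placeholder profile name:
  $ minikube start -p functional-demo --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
  $ kubectl --context functional-demo get po -l tier=control-plane -n kube-system   # control plane should come back Ready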

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-032350 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-032350 logs: (1.460247997s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 logs --file /tmp/TestFunctionalserialLogsFileCmd4223548209/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-032350 logs --file /tmp/TestFunctionalserialLogsFileCmd4223548209/001/logs.txt: (1.431740627s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.43s)

                                                
                                    
TestFunctional/serial/InvalidService (4.28s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-032350 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-032350
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-032350: exit status 115 (291.826115ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.15:30481 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-032350 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.28s)
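
The exit status 115 above is the expected outcome: minikube service exits with SVC_UNREACHABLE when the requested service has no running pod behind it. Sketch (assumes a manifest equivalent to the invalidsvc.yaml testdata; profile name is a placeholder):
  $ kubectl --context functional-demo apply -f invalidsvc.yaml
  $ minikube service invalid-svc -p functional-demo    # non-zero exit, SVC_UNREACHABLE
  $ kubectl --context functional-demo delete -f invalidsvc.yaml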

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032350 config get cpus: exit status 14 (68.586675ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032350 config get cpus: exit status 14 (77.100767ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
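
config get on a key that is not present in the profile config exits with status 14, which is what the two Non-zero exits above are asserting. Sketch with a placeholder profile name:
  $ minikube -p functional-demo config set cpus 2
  $ minikube -p functional-demo config get cpus     # prints 2
  $ minikube -p functional-demo config unset cpus
  $ minikube -p functional-demo config get cpus     # exit status 14: key not found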

                                                
                                    
TestFunctional/parallel/DashboardCmd (24.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-032350 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-032350 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 626285: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (24.84s)

                                                
                                    
TestFunctional/parallel/DryRun (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032350 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-032350 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (154.474457ms)

                                                
                                                
-- stdout --
	* [functional-032350] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20068
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 10:48:36.711386  626036 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:48:36.711699  626036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:48:36.711711  626036 out.go:358] Setting ErrFile to fd 2...
	I1209 10:48:36.711718  626036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:48:36.711944  626036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 10:48:36.712511  626036 out.go:352] Setting JSON to false
	I1209 10:48:36.713596  626036 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":12661,"bootTime":1733728656,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 10:48:36.713714  626036 start.go:139] virtualization: kvm guest
	I1209 10:48:36.715853  626036 out.go:177] * [functional-032350] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 10:48:36.717260  626036 notify.go:220] Checking for updates...
	I1209 10:48:36.717339  626036 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 10:48:36.718879  626036 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 10:48:36.720343  626036 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:48:36.721874  626036 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:48:36.723169  626036 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 10:48:36.724468  626036 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 10:48:36.726109  626036 config.go:182] Loaded profile config "functional-032350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:48:36.726709  626036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:48:36.726803  626036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:48:36.743078  626036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37141
	I1209 10:48:36.743527  626036 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:48:36.744168  626036 main.go:141] libmachine: Using API Version  1
	I1209 10:48:36.744217  626036 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:48:36.744634  626036 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:48:36.744836  626036 main.go:141] libmachine: (functional-032350) Calling .DriverName
	I1209 10:48:36.745169  626036 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 10:48:36.745613  626036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:48:36.745671  626036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:48:36.761091  626036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
	I1209 10:48:36.761662  626036 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:48:36.762314  626036 main.go:141] libmachine: Using API Version  1
	I1209 10:48:36.762336  626036 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:48:36.762687  626036 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:48:36.762879  626036 main.go:141] libmachine: (functional-032350) Calling .DriverName
	I1209 10:48:36.797145  626036 out.go:177] * Using the kvm2 driver based on existing profile
	I1209 10:48:36.798392  626036 start.go:297] selected driver: kvm2
	I1209 10:48:36.798418  626036 start.go:901] validating driver "kvm2" against &{Name:functional-032350 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-032350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:48:36.798559  626036 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 10:48:36.800699  626036 out.go:201] 
	W1209 10:48:36.802012  626036 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 10:48:36.803224  626036 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032350 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)
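
--dry-run runs the full argument and driver validation without touching the existing cluster, which is why the undersized memory request is rejected (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY) while the second, valid invocation passes. Sketch with a placeholder profile name:
  $ minikube start -p functional-demo --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
  # exits 23: requested 250MiB is below the usable minimum of 1800MB
  $ minikube start -p functional-demo --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio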

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-032350 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-032350 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (178.52141ms)

                                                
                                                
-- stdout --
	* [functional-032350] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20068
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 10:48:36.917042  626137 out.go:345] Setting OutFile to fd 1 ...
	I1209 10:48:36.917214  626137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:48:36.917227  626137 out.go:358] Setting ErrFile to fd 2...
	I1209 10:48:36.917234  626137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 10:48:36.917690  626137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 10:48:36.918457  626137 out.go:352] Setting JSON to false
	I1209 10:48:36.919879  626137 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":12661,"bootTime":1733728656,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 10:48:36.920032  626137 start.go:139] virtualization: kvm guest
	I1209 10:48:36.922236  626137 out.go:177] * [functional-032350] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1209 10:48:36.923556  626137 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 10:48:36.923622  626137 notify.go:220] Checking for updates...
	I1209 10:48:36.926022  626137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 10:48:36.927537  626137 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 10:48:36.928946  626137 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 10:48:36.930213  626137 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 10:48:36.931356  626137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 10:48:36.933185  626137 config.go:182] Loaded profile config "functional-032350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 10:48:36.933927  626137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:48:36.934036  626137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:48:36.951439  626137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I1209 10:48:36.951951  626137 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:48:36.952669  626137 main.go:141] libmachine: Using API Version  1
	I1209 10:48:36.952693  626137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:48:36.953305  626137 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:48:36.953519  626137 main.go:141] libmachine: (functional-032350) Calling .DriverName
	I1209 10:48:36.953808  626137 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 10:48:36.954139  626137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 10:48:36.954200  626137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 10:48:36.970583  626137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41591
	I1209 10:48:36.971151  626137 main.go:141] libmachine: () Calling .GetVersion
	I1209 10:48:36.971665  626137 main.go:141] libmachine: Using API Version  1
	I1209 10:48:36.971692  626137 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 10:48:36.972231  626137 main.go:141] libmachine: () Calling .GetMachineName
	I1209 10:48:36.972451  626137 main.go:141] libmachine: (functional-032350) Calling .DriverName
	I1209 10:48:37.008917  626137 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1209 10:48:37.010058  626137 start.go:297] selected driver: kvm2
	I1209 10:48:37.010077  626137 start.go:901] validating driver "kvm2" against &{Name:functional-032350 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-032350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 10:48:37.010226  626137 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 10:48:37.012325  626137 out.go:201] 
	W1209 10:48:37.013683  626137 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 10:48:37.014851  626137 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)
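
Besides the default output, status can emit JSON or a Go template over the same fields used above. Sketch with a placeholder profile name:
  $ minikube -p functional-demo status
  $ minikube -p functional-demo status -o json
  $ minikube -p functional-demo status -f 'host:{{.Host}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'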

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-032350 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-032350 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-d8mlz" [33d40990-ea2d-4701-b5b1-c78a03a0271b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-d8mlz" [33d40990-ea2d-4701-b5b1-c78a03a0271b] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004009895s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.15:32291
functional_test.go:1675: http://192.168.39.15:32291: success! body:

Hostname: hello-node-connect-67bdd5bbb4-d8mlz

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.15:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.15:32291
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.77s)
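The check above boils down to: create an echoserver deployment, expose it on a NodePort, ask "minikube service ... --url" for the endpoint, and GET it. Below is a minimal Go sketch of that final verification step only, assuming the URL printed above (http://192.168.39.15:32291) and that the echoserver body begins with a "Hostname:" line, as in the captured response; it is not the test's actual code.

// probe_service.go - sketch of the verification step the test performs.
// The URL is whatever "minikube service hello-node-connect --url" printed.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
	"time"
)

func main() {
	url := "http://192.168.39.15:32291"
	client := &http.Client{Timeout: 5 * time.Second}

	resp, err := client.Get(url)
	if err != nil {
		log.Fatalf("GET %s: %v", url, err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("reading body: %v", err)
	}
	// The echoserver reply starts with "Hostname: <pod name>", which is how
	// the caller knows the request actually reached the backing pod.
	if !strings.Contains(string(body), "Hostname:") {
		log.Fatalf("unexpected body:\n%s", body)
	}
	fmt.Printf("service reachable, %d bytes returned\n", len(body))
}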

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (47.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3df3e845-cda5-47a8-9065-971573274e93] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006009029s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-032350 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-032350 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-032350 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-032350 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0fff240a-ee6b-4c09-9996-b5353ef00f82] Pending
helpers_test.go:344: "sp-pod" [0fff240a-ee6b-4c09-9996-b5353ef00f82] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0fff240a-ee6b-4c09-9996-b5353ef00f82] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003369778s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-032350 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-032350 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-032350 delete -f testdata/storage-provisioner/pod.yaml: (2.616709467s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-032350 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bb81b3d5-1760-4e80-a249-ff000846a369] Pending
helpers_test.go:344: "sp-pod" [bb81b3d5-1760-4e80-a249-ff000846a369] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bb81b3d5-1760-4e80-a249-ff000846a369] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.003601083s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-032350 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.50s)
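The sequence above is a persistence check: bind a PVC, mount it in sp-pod at /tmp/mount, write a file, recreate the pod, and confirm the file survives. Here is a rough Go sketch of the same steps driven through kubectl. It assumes kubectl is on PATH and the testdata manifests are available locally; the real test polls pod status itself, so "kubectl wait" is used here only for brevity.

// pvc_persistence.go - sketch of the persistence check, using the kubectl
// steps that appear in the log above; pod name and mount path are taken
// from that log.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("kubectl",
		append([]string{"--context", "functional-032350"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	fmt.Printf("kubectl %v\n%s", args, out)
}

func main() {
	run("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")

	// Write through the mounted claim, then recreate the pod.
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")

	// The file must survive the pod restart because it lives on the PVC.
	run("exec", "sp-pod", "--", "ls", "/tmp/mount")
}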

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh -n functional-032350 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 cp functional-032350:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1489534082/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh -n functional-032350 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh -n functional-032350 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.38s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (22.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-032350 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-gz286" [500c6bda-906e-403f-b451-ae0e3f9dd2cb] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-gz286" [500c6bda-906e-403f-b451-ae0e3f9dd2cb] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.010257432s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-032350 exec mysql-6cdb49bbb-gz286 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-032350 exec mysql-6cdb49bbb-gz286 -- mysql -ppassword -e "show databases;": exit status 1 (201.14653ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 10:48:56.435398  617017 retry.go:31] will retry after 697.352955ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-032350 exec mysql-6cdb49bbb-gz286 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-032350 exec mysql-6cdb49bbb-gz286 -- mysql -ppassword -e "show databases;": exit status 1 (147.693555ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1209 10:48:57.281281  617017 retry.go:31] will retry after 2.068874689s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-032350 exec mysql-6cdb49bbb-gz286 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.56s)
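The two failed exec attempts above are expected: the pod reports Running before mysqld accepts connections, so the query fails with ERROR 2002 and is retried with a growing delay (the retry.go lines in the log). Below is a minimal Go sketch of that retry-with-backoff pattern around "kubectl exec", using the pod name from this particular run; in practice the name would be looked up via the app=mysql label.

// mysql_retry.go - sketch of the retry pattern visible in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	delay := 500 * time.Millisecond

	for {
		out, err := exec.Command("kubectl", "--context", "functional-032350",
			"exec", "mysql-6cdb49bbb-gz286", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("mysql is up:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mysql never became ready: %v\n%s", err, out)
		}
		log.Printf("not ready yet (%v), retrying in %v", err, delay)
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff, bounded only by the deadline
	}
}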

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/617017/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "sudo cat /etc/test/nested/copy/617017/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/617017.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "sudo cat /etc/ssl/certs/617017.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/617017.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "sudo cat /usr/share/ca-certificates/617017.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/6170172.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "sudo cat /etc/ssl/certs/6170172.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/6170172.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "sudo cat /usr/share/ca-certificates/6170172.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.68s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-032350 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032350 ssh "sudo systemctl is-active docker": exit status 1 (253.245221ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032350 ssh "sudo systemctl is-active containerd": exit status 1 (246.954158ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
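With crio selected as the container runtime, the test expects "systemctl is-active docker" and "... containerd" to report inactive, which is why both commands exit non-zero ("Process exited with status 3") above. A small Go sketch of the same check via "minikube ssh" follows; the docker and containerd probes mirror the commands in the log, while adding crio to the list is an assumption for illustration.

// runtime_check.go - sketch: only the configured runtime should be active.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func isActive(unit string) (string, bool) {
	// systemctl is-active prints the state and exits 0 only when active.
	out, err := exec.Command("minikube", "-p", "functional-032350",
		"ssh", "sudo systemctl is-active "+unit).Output()
	state := strings.TrimSpace(string(out))
	return state, err == nil && state == "active"
}

func main() {
	for _, unit := range []string{"crio", "docker", "containerd"} {
		state, active := isActive(unit)
		fmt.Printf("%-10s active=%v (%s)\n", unit, active, state)
	}
}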

                                                
                                    
x
+
TestFunctional/parallel/License (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Done: out/minikube-linux-amd64 license: (1.134515396s)
--- PASS: TestFunctional/parallel/License (1.13s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 version --short
--- PASS: TestFunctional/parallel/Version/short (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-032350 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-032350
localhost/kicbase/echo-server:functional-032350
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-032350 image ls --format short --alsologtostderr:
I1209 10:48:59.907630  626878 out.go:345] Setting OutFile to fd 1 ...
I1209 10:48:59.907778  626878 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:48:59.907788  626878 out.go:358] Setting ErrFile to fd 2...
I1209 10:48:59.907791  626878 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:48:59.907992  626878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
I1209 10:48:59.908651  626878 config.go:182] Loaded profile config "functional-032350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 10:48:59.908757  626878 config.go:182] Loaded profile config "functional-032350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 10:48:59.909124  626878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 10:48:59.909168  626878 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 10:48:59.924870  626878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
I1209 10:48:59.925470  626878 main.go:141] libmachine: () Calling .GetVersion
I1209 10:48:59.926210  626878 main.go:141] libmachine: Using API Version  1
I1209 10:48:59.926249  626878 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 10:48:59.926618  626878 main.go:141] libmachine: () Calling .GetMachineName
I1209 10:48:59.926806  626878 main.go:141] libmachine: (functional-032350) Calling .GetState
I1209 10:48:59.928739  626878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 10:48:59.928801  626878 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 10:48:59.944625  626878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37289
I1209 10:48:59.945177  626878 main.go:141] libmachine: () Calling .GetVersion
I1209 10:48:59.945724  626878 main.go:141] libmachine: Using API Version  1
I1209 10:48:59.945752  626878 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 10:48:59.946120  626878 main.go:141] libmachine: () Calling .GetMachineName
I1209 10:48:59.946313  626878 main.go:141] libmachine: (functional-032350) Calling .DriverName
I1209 10:48:59.946502  626878 ssh_runner.go:195] Run: systemctl --version
I1209 10:48:59.946526  626878 main.go:141] libmachine: (functional-032350) Calling .GetSSHHostname
I1209 10:48:59.949535  626878 main.go:141] libmachine: (functional-032350) DBG | domain functional-032350 has defined MAC address 52:54:00:a2:b0:d5 in network mk-functional-032350
I1209 10:48:59.949981  626878 main.go:141] libmachine: (functional-032350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b0:d5", ip: ""} in network mk-functional-032350: {Iface:virbr1 ExpiryTime:2024-12-09 11:46:05 +0000 UTC Type:0 Mac:52:54:00:a2:b0:d5 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-032350 Clientid:01:52:54:00:a2:b0:d5}
I1209 10:48:59.950006  626878 main.go:141] libmachine: (functional-032350) DBG | domain functional-032350 has defined IP address 192.168.39.15 and MAC address 52:54:00:a2:b0:d5 in network mk-functional-032350
I1209 10:48:59.950154  626878 main.go:141] libmachine: (functional-032350) Calling .GetSSHPort
I1209 10:48:59.950339  626878 main.go:141] libmachine: (functional-032350) Calling .GetSSHKeyPath
I1209 10:48:59.950495  626878 main.go:141] libmachine: (functional-032350) Calling .GetSSHUsername
I1209 10:48:59.950637  626878 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/functional-032350/id_rsa Username:docker}
I1209 10:49:00.045426  626878 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 10:49:00.298537  626878 main.go:141] libmachine: Making call to close driver server
I1209 10:49:00.298557  626878 main.go:141] libmachine: (functional-032350) Calling .Close
I1209 10:49:00.298856  626878 main.go:141] libmachine: Successfully made call to close driver server
I1209 10:49:00.298883  626878 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 10:49:00.298896  626878 main.go:141] libmachine: Making call to close driver server
I1209 10:49:00.298903  626878 main.go:141] libmachine: (functional-032350) Calling .Close
I1209 10:49:00.301129  626878 main.go:141] libmachine: Successfully made call to close driver server
I1209 10:49:00.301147  626878 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 10:49:00.301149  626878 main.go:141] libmachine: (functional-032350) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-032350 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-032350  | 4f99c3f73081d | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| docker.io/library/nginx                 | latest             | 66f8bdd3810c9 | 196MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-032350  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-032350 image ls --format table --alsologtostderr:
I1209 10:49:00.821946  627002 out.go:345] Setting OutFile to fd 1 ...
I1209 10:49:00.822064  627002 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:49:00.822075  627002 out.go:358] Setting ErrFile to fd 2...
I1209 10:49:00.822079  627002 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:49:00.822302  627002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
I1209 10:49:00.822932  627002 config.go:182] Loaded profile config "functional-032350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 10:49:00.823040  627002 config.go:182] Loaded profile config "functional-032350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 10:49:00.823404  627002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 10:49:00.823447  627002 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 10:49:00.839088  627002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38243
I1209 10:49:00.839654  627002 main.go:141] libmachine: () Calling .GetVersion
I1209 10:49:00.840257  627002 main.go:141] libmachine: Using API Version  1
I1209 10:49:00.840278  627002 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 10:49:00.840666  627002 main.go:141] libmachine: () Calling .GetMachineName
I1209 10:49:00.840851  627002 main.go:141] libmachine: (functional-032350) Calling .GetState
I1209 10:49:00.842829  627002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 10:49:00.842882  627002 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 10:49:00.857969  627002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38245
I1209 10:49:00.858570  627002 main.go:141] libmachine: () Calling .GetVersion
I1209 10:49:00.859169  627002 main.go:141] libmachine: Using API Version  1
I1209 10:49:00.859195  627002 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 10:49:00.859524  627002 main.go:141] libmachine: () Calling .GetMachineName
I1209 10:49:00.859722  627002 main.go:141] libmachine: (functional-032350) Calling .DriverName
I1209 10:49:00.859899  627002 ssh_runner.go:195] Run: systemctl --version
I1209 10:49:00.859924  627002 main.go:141] libmachine: (functional-032350) Calling .GetSSHHostname
I1209 10:49:00.862679  627002 main.go:141] libmachine: (functional-032350) DBG | domain functional-032350 has defined MAC address 52:54:00:a2:b0:d5 in network mk-functional-032350
I1209 10:49:00.863186  627002 main.go:141] libmachine: (functional-032350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b0:d5", ip: ""} in network mk-functional-032350: {Iface:virbr1 ExpiryTime:2024-12-09 11:46:05 +0000 UTC Type:0 Mac:52:54:00:a2:b0:d5 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-032350 Clientid:01:52:54:00:a2:b0:d5}
I1209 10:49:00.863226  627002 main.go:141] libmachine: (functional-032350) DBG | domain functional-032350 has defined IP address 192.168.39.15 and MAC address 52:54:00:a2:b0:d5 in network mk-functional-032350
I1209 10:49:00.863352  627002 main.go:141] libmachine: (functional-032350) Calling .GetSSHPort
I1209 10:49:00.863515  627002 main.go:141] libmachine: (functional-032350) Calling .GetSSHKeyPath
I1209 10:49:00.863640  627002 main.go:141] libmachine: (functional-032350) Calling .GetSSHUsername
I1209 10:49:00.863809  627002 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/functional-032350/id_rsa Username:docker}
I1209 10:49:00.956527  627002 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 10:49:00.992329  627002 main.go:141] libmachine: Making call to close driver server
I1209 10:49:00.992350  627002 main.go:141] libmachine: (functional-032350) Calling .Close
I1209 10:49:00.992662  627002 main.go:141] libmachine: Successfully made call to close driver server
I1209 10:49:00.992687  627002 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 10:49:00.992689  627002 main.go:141] libmachine: (functional-032350) DBG | Closing plugin on server side
I1209 10:49:00.992711  627002 main.go:141] libmachine: Making call to close driver server
I1209 10:49:00.992719  627002 main.go:141] libmachine: (functional-032350) Calling .Close
I1209 10:49:00.992982  627002 main.go:141] libmachine: Successfully made call to close driver server
I1209 10:49:00.993000  627002 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 10:49:00.993018  627002 main.go:141] libmachine: (functional-032350) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-032350 image ls --format json --alsologtostderr:
[{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"r
epoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":["docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/libra
ry/nginx:latest"],"size":"195919252"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-032350"],"size":"4943877"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"da86e6ba6ca197bf6bc5
e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"4f99c3f73081d7237711e9fd0c4de6c3cbcd9400213a574af65c7a34342210de","repoDigests":["localhost/minikube-local-cache-test@sha256:cc8635dabd612ab4ca7931b014583e144d757271e1e2b537676967f8833d87f0"],"repoTags":["localhost/minikube-local-cache-test:functional-032350"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e85
4ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53
b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a8
5c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-032350 image ls --format json --alsologtostderr:
I1209 10:49:00.579371  626955 out.go:345] Setting OutFile to fd 1 ...
I1209 10:49:00.579502  626955 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:49:00.579511  626955 out.go:358] Setting ErrFile to fd 2...
I1209 10:49:00.579518  626955 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:49:00.579792  626955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
I1209 10:49:00.580661  626955 config.go:182] Loaded profile config "functional-032350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 10:49:00.580820  626955 config.go:182] Loaded profile config "functional-032350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 10:49:00.581363  626955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 10:49:00.581419  626955 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 10:49:00.598428  626955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
I1209 10:49:00.599076  626955 main.go:141] libmachine: () Calling .GetVersion
I1209 10:49:00.599739  626955 main.go:141] libmachine: Using API Version  1
I1209 10:49:00.599775  626955 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 10:49:00.600164  626955 main.go:141] libmachine: () Calling .GetMachineName
I1209 10:49:00.600410  626955 main.go:141] libmachine: (functional-032350) Calling .GetState
I1209 10:49:00.602694  626955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 10:49:00.602751  626955 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 10:49:00.618654  626955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38111
I1209 10:49:00.619076  626955 main.go:141] libmachine: () Calling .GetVersion
I1209 10:49:00.619696  626955 main.go:141] libmachine: Using API Version  1
I1209 10:49:00.619712  626955 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 10:49:00.620058  626955 main.go:141] libmachine: () Calling .GetMachineName
I1209 10:49:00.620265  626955 main.go:141] libmachine: (functional-032350) Calling .DriverName
I1209 10:49:00.620443  626955 ssh_runner.go:195] Run: systemctl --version
I1209 10:49:00.620475  626955 main.go:141] libmachine: (functional-032350) Calling .GetSSHHostname
I1209 10:49:00.623865  626955 main.go:141] libmachine: (functional-032350) DBG | domain functional-032350 has defined MAC address 52:54:00:a2:b0:d5 in network mk-functional-032350
I1209 10:49:00.624280  626955 main.go:141] libmachine: (functional-032350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b0:d5", ip: ""} in network mk-functional-032350: {Iface:virbr1 ExpiryTime:2024-12-09 11:46:05 +0000 UTC Type:0 Mac:52:54:00:a2:b0:d5 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-032350 Clientid:01:52:54:00:a2:b0:d5}
I1209 10:49:00.624320  626955 main.go:141] libmachine: (functional-032350) DBG | domain functional-032350 has defined IP address 192.168.39.15 and MAC address 52:54:00:a2:b0:d5 in network mk-functional-032350
I1209 10:49:00.624414  626955 main.go:141] libmachine: (functional-032350) Calling .GetSSHPort
I1209 10:49:00.624573  626955 main.go:141] libmachine: (functional-032350) Calling .GetSSHKeyPath
I1209 10:49:00.624693  626955 main.go:141] libmachine: (functional-032350) Calling .GetSSHUsername
I1209 10:49:00.624803  626955 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/functional-032350/id_rsa Username:docker}
I1209 10:49:00.710518  626955 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 10:49:00.764880  626955 main.go:141] libmachine: Making call to close driver server
I1209 10:49:00.764902  626955 main.go:141] libmachine: (functional-032350) Calling .Close
I1209 10:49:00.765241  626955 main.go:141] libmachine: Successfully made call to close driver server
I1209 10:49:00.765259  626955 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 10:49:00.765273  626955 main.go:141] libmachine: Making call to close driver server
I1209 10:49:00.765280  626955 main.go:141] libmachine: (functional-032350) Calling .Close
I1209 10:49:00.765658  626955 main.go:141] libmachine: (functional-032350) DBG | Closing plugin on server side
I1209 10:49:00.765726  626955 main.go:141] libmachine: Successfully made call to close driver server
I1209 10:49:00.765765  626955 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
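The stdout above is a JSON array of image records with id, repoDigests, repoTags, and size fields (size is emitted as a string of bytes). A short Go sketch, not part of the test, that runs the same command and prints one tag and the size per image, using only the fields visible in that output:

// image_ls_json.go - parses the JSON format shown in the stdout above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a string in this output
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-032350",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls: %v", err)
	}
	var images []imageRecord
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decoding image list: %v", err)
	}
	for _, img := range images {
		tag := "<untagged>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
	}
}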

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-032350 image ls --format yaml --alsologtostderr:
- id: 66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests:
- docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "195919252"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 4f99c3f73081d7237711e9fd0c4de6c3cbcd9400213a574af65c7a34342210de
repoDigests:
- localhost/minikube-local-cache-test@sha256:cc8635dabd612ab4ca7931b014583e144d757271e1e2b537676967f8833d87f0
repoTags:
- localhost/minikube-local-cache-test:functional-032350
size: "3330"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-032350
size: "4943877"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-032350 image ls --format yaml --alsologtostderr:
I1209 10:49:00.306981  626902 out.go:345] Setting OutFile to fd 1 ...
I1209 10:49:00.307119  626902 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:49:00.307130  626902 out.go:358] Setting ErrFile to fd 2...
I1209 10:49:00.307136  626902 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:49:00.307318  626902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
I1209 10:49:00.307992  626902 config.go:182] Loaded profile config "functional-032350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 10:49:00.308124  626902 config.go:182] Loaded profile config "functional-032350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 10:49:00.308523  626902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 10:49:00.308571  626902 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 10:49:00.327034  626902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
I1209 10:49:00.327638  626902 main.go:141] libmachine: () Calling .GetVersion
I1209 10:49:00.328301  626902 main.go:141] libmachine: Using API Version  1
I1209 10:49:00.328329  626902 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 10:49:00.328891  626902 main.go:141] libmachine: () Calling .GetMachineName
I1209 10:49:00.329122  626902 main.go:141] libmachine: (functional-032350) Calling .GetState
I1209 10:49:00.331228  626902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 10:49:00.331278  626902 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 10:49:00.348783  626902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
I1209 10:49:00.349431  626902 main.go:141] libmachine: () Calling .GetVersion
I1209 10:49:00.349915  626902 main.go:141] libmachine: Using API Version  1
I1209 10:49:00.349942  626902 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 10:49:00.350481  626902 main.go:141] libmachine: () Calling .GetMachineName
I1209 10:49:00.350716  626902 main.go:141] libmachine: (functional-032350) Calling .DriverName
I1209 10:49:00.350916  626902 ssh_runner.go:195] Run: systemctl --version
I1209 10:49:00.350937  626902 main.go:141] libmachine: (functional-032350) Calling .GetSSHHostname
I1209 10:49:00.354055  626902 main.go:141] libmachine: (functional-032350) DBG | domain functional-032350 has defined MAC address 52:54:00:a2:b0:d5 in network mk-functional-032350
I1209 10:49:00.354356  626902 main.go:141] libmachine: (functional-032350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b0:d5", ip: ""} in network mk-functional-032350: {Iface:virbr1 ExpiryTime:2024-12-09 11:46:05 +0000 UTC Type:0 Mac:52:54:00:a2:b0:d5 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-032350 Clientid:01:52:54:00:a2:b0:d5}
I1209 10:49:00.354370  626902 main.go:141] libmachine: (functional-032350) DBG | domain functional-032350 has defined IP address 192.168.39.15 and MAC address 52:54:00:a2:b0:d5 in network mk-functional-032350
I1209 10:49:00.354509  626902 main.go:141] libmachine: (functional-032350) Calling .GetSSHPort
I1209 10:49:00.354646  626902 main.go:141] libmachine: (functional-032350) Calling .GetSSHKeyPath
I1209 10:49:00.354806  626902 main.go:141] libmachine: (functional-032350) Calling .GetSSHUsername
I1209 10:49:00.354918  626902 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/functional-032350/id_rsa Username:docker}
I1209 10:49:00.452080  626902 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 10:49:00.515447  626902 main.go:141] libmachine: Making call to close driver server
I1209 10:49:00.515475  626902 main.go:141] libmachine: (functional-032350) Calling .Close
I1209 10:49:00.515810  626902 main.go:141] libmachine: Successfully made call to close driver server
I1209 10:49:00.515834  626902 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 10:49:00.515840  626902 main.go:141] libmachine: (functional-032350) DBG | Closing plugin on server side
I1209 10:49:00.515878  626902 main.go:141] libmachine: Making call to close driver server
I1209 10:49:00.515890  626902 main.go:141] libmachine: (functional-032350) Calling .Close
I1209 10:49:00.516146  626902 main.go:141] libmachine: Successfully made call to close driver server
I1209 10:49:00.516172  626902 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 10:49:00.516186  626902 main.go:141] libmachine: (functional-032350) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032350 ssh pgrep buildkitd: exit status 1 (242.585001ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image build -t localhost/my-image:functional-032350 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-032350 image build -t localhost/my-image:functional-032350 testdata/build --alsologtostderr: (3.879845216s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-032350 image build -t localhost/my-image:functional-032350 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4f8b720cb4b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-032350
--> 395b95ddf1a
Successfully tagged localhost/my-image:functional-032350
395b95ddf1ac632e0c5de78dc5dcaf92d25283a6036e7266837e3d21f453ca0f
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-032350 image build -t localhost/my-image:functional-032350 testdata/build --alsologtostderr:
I1209 10:49:00.600560  626965 out.go:345] Setting OutFile to fd 1 ...
I1209 10:49:00.600698  626965 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:49:00.600711  626965 out.go:358] Setting ErrFile to fd 2...
I1209 10:49:00.600719  626965 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 10:49:00.600909  626965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
I1209 10:49:00.601546  626965 config.go:182] Loaded profile config "functional-032350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 10:49:00.602152  626965 config.go:182] Loaded profile config "functional-032350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 10:49:00.602695  626965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 10:49:00.602774  626965 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 10:49:00.618627  626965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46029
I1209 10:49:00.619186  626965 main.go:141] libmachine: () Calling .GetVersion
I1209 10:49:00.619738  626965 main.go:141] libmachine: Using API Version  1
I1209 10:49:00.619755  626965 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 10:49:00.620117  626965 main.go:141] libmachine: () Calling .GetMachineName
I1209 10:49:00.620304  626965 main.go:141] libmachine: (functional-032350) Calling .GetState
I1209 10:49:00.622260  626965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1209 10:49:00.622303  626965 main.go:141] libmachine: Launching plugin server for driver kvm2
I1209 10:49:00.638277  626965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35311
I1209 10:49:00.638810  626965 main.go:141] libmachine: () Calling .GetVersion
I1209 10:49:00.639440  626965 main.go:141] libmachine: Using API Version  1
I1209 10:49:00.639474  626965 main.go:141] libmachine: () Calling .SetConfigRaw
I1209 10:49:00.639788  626965 main.go:141] libmachine: () Calling .GetMachineName
I1209 10:49:00.639990  626965 main.go:141] libmachine: (functional-032350) Calling .DriverName
I1209 10:49:00.640176  626965 ssh_runner.go:195] Run: systemctl --version
I1209 10:49:00.640203  626965 main.go:141] libmachine: (functional-032350) Calling .GetSSHHostname
I1209 10:49:00.643121  626965 main.go:141] libmachine: (functional-032350) DBG | domain functional-032350 has defined MAC address 52:54:00:a2:b0:d5 in network mk-functional-032350
I1209 10:49:00.643580  626965 main.go:141] libmachine: (functional-032350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b0:d5", ip: ""} in network mk-functional-032350: {Iface:virbr1 ExpiryTime:2024-12-09 11:46:05 +0000 UTC Type:0 Mac:52:54:00:a2:b0:d5 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-032350 Clientid:01:52:54:00:a2:b0:d5}
I1209 10:49:00.643616  626965 main.go:141] libmachine: (functional-032350) DBG | domain functional-032350 has defined IP address 192.168.39.15 and MAC address 52:54:00:a2:b0:d5 in network mk-functional-032350
I1209 10:49:00.643732  626965 main.go:141] libmachine: (functional-032350) Calling .GetSSHPort
I1209 10:49:00.643897  626965 main.go:141] libmachine: (functional-032350) Calling .GetSSHKeyPath
I1209 10:49:00.644027  626965 main.go:141] libmachine: (functional-032350) Calling .GetSSHUsername
I1209 10:49:00.644166  626965 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/functional-032350/id_rsa Username:docker}
I1209 10:49:00.748671  626965 build_images.go:161] Building image from path: /tmp/build.3317217527.tar
I1209 10:49:00.748757  626965 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 10:49:00.772556  626965 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3317217527.tar
I1209 10:49:00.778480  626965 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3317217527.tar: stat -c "%s %y" /var/lib/minikube/build/build.3317217527.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3317217527.tar': No such file or directory
I1209 10:49:00.778514  626965 ssh_runner.go:362] scp /tmp/build.3317217527.tar --> /var/lib/minikube/build/build.3317217527.tar (3072 bytes)
I1209 10:49:00.807362  626965 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3317217527
I1209 10:49:00.819076  626965 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3317217527 -xf /var/lib/minikube/build/build.3317217527.tar
I1209 10:49:00.829323  626965 crio.go:315] Building image: /var/lib/minikube/build/build.3317217527
I1209 10:49:00.829401  626965 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-032350 /var/lib/minikube/build/build.3317217527 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1209 10:49:04.402293  626965 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-032350 /var/lib/minikube/build/build.3317217527 --cgroup-manager=cgroupfs: (3.572854808s)
I1209 10:49:04.402374  626965 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3317217527
I1209 10:49:04.412275  626965 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3317217527.tar
I1209 10:49:04.421830  626965 build_images.go:217] Built localhost/my-image:functional-032350 from /tmp/build.3317217527.tar
I1209 10:49:04.421859  626965 build_images.go:133] succeeded building to: functional-032350
I1209 10:49:04.421864  626965 build_images.go:134] failed building to: 
I1209 10:49:04.421926  626965 main.go:141] libmachine: Making call to close driver server
I1209 10:49:04.421938  626965 main.go:141] libmachine: (functional-032350) Calling .Close
I1209 10:49:04.422233  626965 main.go:141] libmachine: Successfully made call to close driver server
I1209 10:49:04.422255  626965 main.go:141] libmachine: Making call to close connection to plugin binary
I1209 10:49:04.422272  626965 main.go:141] libmachine: Making call to close driver server
I1209 10:49:04.422281  626965 main.go:141] libmachine: (functional-032350) Calling .Close
I1209 10:49:04.422530  626965 main.go:141] libmachine: (functional-032350) DBG | Closing plugin on server side
I1209 10:49:04.422605  626965 main.go:141] libmachine: Successfully made call to close driver server
I1209 10:49:04.422649  626965 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.37s)
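For reference, the STEP 1/3 through STEP 3/3 lines logged above imply a build file of roughly the following shape. This is only a sketch reconstructed from the logged build output; the actual contents of the testdata/build context (including content.txt) are not reproduced in this report:

# Containerfile sketch inferred from the podman build steps shown in the stdout above.
# Assumes the build context supplies a content.txt file, as implied by STEP 3/3.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /

Because the cluster uses CRI-O, minikube copies the build context tarball into the guest (the scp of build.3317217527.tar) and runs the build there with podman (sudo podman build -t localhost/my-image:functional-032350 ... --cgroup-manager=cgroupfs), as the stderr trace shows.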

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.718406108s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-032350
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-032350 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-032350 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-jzc7m" [f1926f77-c67d-44c9-a8b8-db71dc7f9539] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-jzc7m" [f1926f77-c67d-44c9-a8b8-db71dc7f9539] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003758965s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image load --daemon kicbase/echo-server:functional-032350 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-032350 image load --daemon kicbase/echo-server:functional-032350 --alsologtostderr: (2.855297985s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image load --daemon kicbase/echo-server:functional-032350 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-032350
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image load --daemon kicbase/echo-server:functional-032350 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image save kicbase/echo-server:functional-032350 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image rm kicbase/echo-server:functional-032350 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-032350
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 image save --daemon kicbase/echo-server:functional-032350 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-032350
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.82s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 update-context --alsologtostderr -v=2
2024/12/09 10:49:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 service list -o json
functional_test.go:1494: Took "368.330963ms" to run "out/minikube-linux-amd64 -p functional-032350 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.15:31864
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (21.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-032350 /tmp/TestFunctionalparallelMountCmdany-port3911761618/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733741315614324015" to /tmp/TestFunctionalparallelMountCmdany-port3911761618/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733741315614324015" to /tmp/TestFunctionalparallelMountCmdany-port3911761618/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733741315614324015" to /tmp/TestFunctionalparallelMountCmdany-port3911761618/001/test-1733741315614324015
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032350 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (282.296692ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 10:48:35.896996  617017 retry.go:31] will retry after 683.947961ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 10:48 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 10:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 10:48 test-1733741315614324015
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh cat /mount-9p/test-1733741315614324015
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-032350 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2e1e8489-0563-4c1c-88b0-b3e87018e642] Pending
helpers_test.go:344: "busybox-mount" [2e1e8489-0563-4c1c-88b0-b3e87018e642] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2e1e8489-0563-4c1c-88b0-b3e87018e642] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2e1e8489-0563-4c1c-88b0-b3e87018e642] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 18.005225003s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-032350 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032350 /tmp/TestFunctionalparallelMountCmdany-port3911761618/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (21.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "364.759688ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "57.591855ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.15:31864
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "317.492125ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "58.941993ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-032350 /tmp/TestFunctionalparallelMountCmdspecific-port2847572409/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032350 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (282.553919ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 10:48:56.947787  617017 retry.go:31] will retry after 545.933216ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032350 /tmp/TestFunctionalparallelMountCmdspecific-port2847572409/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-032350 ssh "sudo umount -f /mount-9p": exit status 1 (257.69968ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-032350 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032350 /tmp/TestFunctionalparallelMountCmdspecific-port2847572409/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-032350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1357297275/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-032350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1357297275/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-032350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1357297275/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-032350 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-032350 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1357297275/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1357297275/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-032350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1357297275/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.91s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-032350
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-032350
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-032350
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (195.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-792382 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1209 10:49:17.166637  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:51:33.303536  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:52:01.014394  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-792382 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m14.329499825s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (195.01s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-792382 -- rollout status deployment/busybox: (4.685310307s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- exec busybox-7dff88458-ft8s2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- exec busybox-7dff88458-rbrpt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- exec busybox-7dff88458-z9wjm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- exec busybox-7dff88458-ft8s2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- exec busybox-7dff88458-rbrpt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- exec busybox-7dff88458-z9wjm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- exec busybox-7dff88458-ft8s2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- exec busybox-7dff88458-rbrpt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- exec busybox-7dff88458-z9wjm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.82s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- exec busybox-7dff88458-ft8s2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- exec busybox-7dff88458-ft8s2 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- exec busybox-7dff88458-rbrpt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- exec busybox-7dff88458-rbrpt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- exec busybox-7dff88458-z9wjm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-792382 -- exec busybox-7dff88458-z9wjm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-792382 -v=7 --alsologtostderr
E1209 10:53:22.652363  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:53:22.658795  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:53:22.670243  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:53:22.691682  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:53:22.733138  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:53:22.814637  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:53:22.976227  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:53:23.297975  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:53:23.939356  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:53:25.221581  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
E1209 10:53:27.783613  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-792382 -v=7 --alsologtostderr: (56.17665835s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.04s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-792382 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1209 10:53:32.905465  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp testdata/cp-test.txt ha-792382:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp ha-792382:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3558467820/001/cp-test_ha-792382.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp ha-792382:/home/docker/cp-test.txt ha-792382-m02:/home/docker/cp-test_ha-792382_ha-792382-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m02 "sudo cat /home/docker/cp-test_ha-792382_ha-792382-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp ha-792382:/home/docker/cp-test.txt ha-792382-m03:/home/docker/cp-test_ha-792382_ha-792382-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m03 "sudo cat /home/docker/cp-test_ha-792382_ha-792382-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp ha-792382:/home/docker/cp-test.txt ha-792382-m04:/home/docker/cp-test_ha-792382_ha-792382-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m04 "sudo cat /home/docker/cp-test_ha-792382_ha-792382-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp testdata/cp-test.txt ha-792382-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp ha-792382-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3558467820/001/cp-test_ha-792382-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp ha-792382-m02:/home/docker/cp-test.txt ha-792382:/home/docker/cp-test_ha-792382-m02_ha-792382.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382 "sudo cat /home/docker/cp-test_ha-792382-m02_ha-792382.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp ha-792382-m02:/home/docker/cp-test.txt ha-792382-m03:/home/docker/cp-test_ha-792382-m02_ha-792382-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m03 "sudo cat /home/docker/cp-test_ha-792382-m02_ha-792382-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp ha-792382-m02:/home/docker/cp-test.txt ha-792382-m04:/home/docker/cp-test_ha-792382-m02_ha-792382-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m04 "sudo cat /home/docker/cp-test_ha-792382-m02_ha-792382-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp testdata/cp-test.txt ha-792382-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3558467820/001/cp-test_ha-792382-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt ha-792382:/home/docker/cp-test_ha-792382-m03_ha-792382.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382 "sudo cat /home/docker/cp-test_ha-792382-m03_ha-792382.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt ha-792382-m02:/home/docker/cp-test_ha-792382-m03_ha-792382-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m02 "sudo cat /home/docker/cp-test_ha-792382-m03_ha-792382-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp ha-792382-m03:/home/docker/cp-test.txt ha-792382-m04:/home/docker/cp-test_ha-792382-m03_ha-792382-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m04 "sudo cat /home/docker/cp-test_ha-792382-m03_ha-792382-m04.txt"
E1209 10:53:43.147252  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp testdata/cp-test.txt ha-792382-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3558467820/001/cp-test_ha-792382-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt ha-792382:/home/docker/cp-test_ha-792382-m04_ha-792382.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382 "sudo cat /home/docker/cp-test_ha-792382-m04_ha-792382.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt ha-792382-m02:/home/docker/cp-test_ha-792382-m04_ha-792382-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m02 "sudo cat /home/docker/cp-test_ha-792382-m04_ha-792382-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 cp ha-792382-m04:/home/docker/cp-test.txt ha-792382-m03:/home/docker/cp-test_ha-792382-m04_ha-792382-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 ssh -n ha-792382-m03 "sudo cat /home/docker/cp-test_ha-792382-m04_ha-792382-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.11s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-792382 node delete m03 -v=7 --alsologtostderr: (15.916988884s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.67s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (349.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-792382 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1209 11:06:33.302850  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:08:22.653123  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:09:45.723158  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-792382 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m48.269079048s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (349.09s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (77.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-792382 --control-plane -v=7 --alsologtostderr
E1209 11:11:33.303549  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-792382 --control-plane -v=7 --alsologtostderr: (1m16.480808141s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-792382 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.32s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                    
TestJSONOutput/start/Command (52.88s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-263704 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-263704 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (52.882608989s)
--- PASS: TestJSONOutput/start/Command (52.88s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-263704 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-263704 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.34s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-263704 --output=json --user=testUser
E1209 11:13:22.652612  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-263704 --output=json --user=testUser: (7.336778397s)
--- PASS: TestJSONOutput/stop/Command (7.34s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-886097 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-886097 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.514185ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ffcbd92d-cf33-453f-8d05-ac3ab56e524c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-886097] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d78f9120-25db-48f2-9160-d56c5bdd3bce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20068"}}
	{"specversion":"1.0","id":"5d62075c-3502-47d6-8983-db55319e20b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bbe15b69-f964-4e0b-a556-83388ea28f3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig"}}
	{"specversion":"1.0","id":"5ec08f09-ac32-477b-97bf-5734881ae666","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube"}}
	{"specversion":"1.0","id":"9befac49-e3a7-47e0-9974-7f9c0754946c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6f1ff9a0-6622-400e-bc44-f87e66f7e71e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1a2caf89-08c3-4bb4-b51d-c57ee13a8b4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-886097" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-886097
--- PASS: TestErrorJSONOutput (0.20s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (86.38s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-432783 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-432783 --driver=kvm2  --container-runtime=crio: (41.051575194s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-462936 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-462936 --driver=kvm2  --container-runtime=crio: (42.685994033s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-432783
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-462936
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-462936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-462936
helpers_test.go:175: Cleaning up "first-432783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-432783
--- PASS: TestMinikubeProfile (86.38s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (30.52s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-491240 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-491240 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.517696535s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.52s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-491240 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-491240 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (25.93s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-513596 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-513596 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.926967788s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.93s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513596 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513596 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.88s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-491240 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.88s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513596 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513596 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-513596
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-513596: (1.27779088s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.43s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-513596
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-513596: (22.425204442s)
--- PASS: TestMountStart/serial/RestartStopped (23.43s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513596 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513596 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (112.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-714725 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1209 11:16:33.302982  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-714725 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.23404276s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.67s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-714725 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-714725 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-714725 -- rollout status deployment/busybox: (4.774078995s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-714725 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-714725 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-714725 -- exec busybox-7dff88458-5n7zd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-714725 -- exec busybox-7dff88458-hqls9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-714725 -- exec busybox-7dff88458-5n7zd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-714725 -- exec busybox-7dff88458-hqls9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-714725 -- exec busybox-7dff88458-5n7zd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-714725 -- exec busybox-7dff88458-hqls9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.27s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-714725 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-714725 -- exec busybox-7dff88458-5n7zd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-714725 -- exec busybox-7dff88458-5n7zd -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-714725 -- exec busybox-7dff88458-hqls9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-714725 -- exec busybox-7dff88458-hqls9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                    
TestMultiNode/serial/AddNode (51.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-714725 -v 3 --alsologtostderr
E1209 11:18:22.653113  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-714725 -v 3 --alsologtostderr: (50.726223022s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.31s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-714725 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.57s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 cp testdata/cp-test.txt multinode-714725:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 cp multinode-714725:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2432959614/001/cp-test_multinode-714725.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 cp multinode-714725:/home/docker/cp-test.txt multinode-714725-m02:/home/docker/cp-test_multinode-714725_multinode-714725-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725-m02 "sudo cat /home/docker/cp-test_multinode-714725_multinode-714725-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 cp multinode-714725:/home/docker/cp-test.txt multinode-714725-m03:/home/docker/cp-test_multinode-714725_multinode-714725-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725-m03 "sudo cat /home/docker/cp-test_multinode-714725_multinode-714725-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 cp testdata/cp-test.txt multinode-714725-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 cp multinode-714725-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2432959614/001/cp-test_multinode-714725-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 cp multinode-714725-m02:/home/docker/cp-test.txt multinode-714725:/home/docker/cp-test_multinode-714725-m02_multinode-714725.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725 "sudo cat /home/docker/cp-test_multinode-714725-m02_multinode-714725.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 cp multinode-714725-m02:/home/docker/cp-test.txt multinode-714725-m03:/home/docker/cp-test_multinode-714725-m02_multinode-714725-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725-m03 "sudo cat /home/docker/cp-test_multinode-714725-m02_multinode-714725-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 cp testdata/cp-test.txt multinode-714725-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 cp multinode-714725-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2432959614/001/cp-test_multinode-714725-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 cp multinode-714725-m03:/home/docker/cp-test.txt multinode-714725:/home/docker/cp-test_multinode-714725-m03_multinode-714725.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725 "sudo cat /home/docker/cp-test_multinode-714725-m03_multinode-714725.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 cp multinode-714725-m03:/home/docker/cp-test.txt multinode-714725-m02:/home/docker/cp-test_multinode-714725-m03_multinode-714725-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 ssh -n multinode-714725-m02 "sudo cat /home/docker/cp-test_multinode-714725-m03_multinode-714725-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.27s)

                                                
                                    
TestMultiNode/serial/StopNode (2.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-714725 node stop m03: (1.47512886s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-714725 status: exit status 7 (425.603171ms)

                                                
                                                
-- stdout --
	multinode-714725
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-714725-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-714725-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-714725 status --alsologtostderr: exit status 7 (441.450685ms)

                                                
                                                
-- stdout --
	multinode-714725
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-714725-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-714725-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 11:19:15.304685  644568 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:19:15.304817  644568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:19:15.304828  644568 out.go:358] Setting ErrFile to fd 2...
	I1209 11:19:15.304833  644568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:19:15.305024  644568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:19:15.305230  644568 out.go:352] Setting JSON to false
	I1209 11:19:15.305270  644568 mustload.go:65] Loading cluster: multinode-714725
	I1209 11:19:15.305356  644568 notify.go:220] Checking for updates...
	I1209 11:19:15.305795  644568 config.go:182] Loaded profile config "multinode-714725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:19:15.305822  644568 status.go:174] checking status of multinode-714725 ...
	I1209 11:19:15.306417  644568 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:19:15.306457  644568 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:19:15.322490  644568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33535
	I1209 11:19:15.323018  644568 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:19:15.323762  644568 main.go:141] libmachine: Using API Version  1
	I1209 11:19:15.323793  644568 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:19:15.324186  644568 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:19:15.324413  644568 main.go:141] libmachine: (multinode-714725) Calling .GetState
	I1209 11:19:15.326066  644568 status.go:371] multinode-714725 host status = "Running" (err=<nil>)
	I1209 11:19:15.326084  644568 host.go:66] Checking if "multinode-714725" exists ...
	I1209 11:19:15.326481  644568 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:19:15.326531  644568 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:19:15.343820  644568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44425
	I1209 11:19:15.344301  644568 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:19:15.344802  644568 main.go:141] libmachine: Using API Version  1
	I1209 11:19:15.344843  644568 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:19:15.345176  644568 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:19:15.345362  644568 main.go:141] libmachine: (multinode-714725) Calling .GetIP
	I1209 11:19:15.348449  644568 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:19:15.348902  644568 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:19:15.348940  644568 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:19:15.349084  644568 host.go:66] Checking if "multinode-714725" exists ...
	I1209 11:19:15.349404  644568 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:19:15.349450  644568 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:19:15.364350  644568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45663
	I1209 11:19:15.364900  644568 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:19:15.365436  644568 main.go:141] libmachine: Using API Version  1
	I1209 11:19:15.365459  644568 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:19:15.365773  644568 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:19:15.365980  644568 main.go:141] libmachine: (multinode-714725) Calling .DriverName
	I1209 11:19:15.366155  644568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 11:19:15.366206  644568 main.go:141] libmachine: (multinode-714725) Calling .GetSSHHostname
	I1209 11:19:15.369339  644568 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:19:15.371077  644568 main.go:141] libmachine: (multinode-714725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:d0:20", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:16:28 +0000 UTC Type:0 Mac:52:54:00:b7:d0:20 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-714725 Clientid:01:52:54:00:b7:d0:20}
	I1209 11:19:15.371113  644568 main.go:141] libmachine: (multinode-714725) DBG | domain multinode-714725 has defined IP address 192.168.39.31 and MAC address 52:54:00:b7:d0:20 in network mk-multinode-714725
	I1209 11:19:15.371183  644568 main.go:141] libmachine: (multinode-714725) Calling .GetSSHPort
	I1209 11:19:15.371412  644568 main.go:141] libmachine: (multinode-714725) Calling .GetSSHKeyPath
	I1209 11:19:15.371599  644568 main.go:141] libmachine: (multinode-714725) Calling .GetSSHUsername
	I1209 11:19:15.371785  644568 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/multinode-714725/id_rsa Username:docker}
	I1209 11:19:15.455988  644568 ssh_runner.go:195] Run: systemctl --version
	I1209 11:19:15.463986  644568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:19:15.482495  644568 kubeconfig.go:125] found "multinode-714725" server: "https://192.168.39.31:8443"
	I1209 11:19:15.482531  644568 api_server.go:166] Checking apiserver status ...
	I1209 11:19:15.482570  644568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 11:19:15.499371  644568 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W1209 11:19:15.508471  644568 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1209 11:19:15.508526  644568 ssh_runner.go:195] Run: ls
	I1209 11:19:15.512387  644568 api_server.go:253] Checking apiserver healthz at https://192.168.39.31:8443/healthz ...
	I1209 11:19:15.516786  644568 api_server.go:279] https://192.168.39.31:8443/healthz returned 200:
	ok
	I1209 11:19:15.516814  644568 status.go:463] multinode-714725 apiserver status = Running (err=<nil>)
	I1209 11:19:15.516827  644568 status.go:176] multinode-714725 status: &{Name:multinode-714725 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 11:19:15.516846  644568 status.go:174] checking status of multinode-714725-m02 ...
	I1209 11:19:15.517229  644568 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:19:15.517266  644568 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:19:15.532687  644568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36497
	I1209 11:19:15.533168  644568 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:19:15.533660  644568 main.go:141] libmachine: Using API Version  1
	I1209 11:19:15.533685  644568 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:19:15.533986  644568 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:19:15.534152  644568 main.go:141] libmachine: (multinode-714725-m02) Calling .GetState
	I1209 11:19:15.535689  644568 status.go:371] multinode-714725-m02 host status = "Running" (err=<nil>)
	I1209 11:19:15.535706  644568 host.go:66] Checking if "multinode-714725-m02" exists ...
	I1209 11:19:15.535983  644568 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:19:15.536019  644568 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:19:15.551487  644568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37273
	I1209 11:19:15.551937  644568 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:19:15.552420  644568 main.go:141] libmachine: Using API Version  1
	I1209 11:19:15.552442  644568 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:19:15.552735  644568 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:19:15.552918  644568 main.go:141] libmachine: (multinode-714725-m02) Calling .GetIP
	I1209 11:19:15.555628  644568 main.go:141] libmachine: (multinode-714725-m02) DBG | domain multinode-714725-m02 has defined MAC address 52:54:00:b9:24:80 in network mk-multinode-714725
	I1209 11:19:15.556091  644568 main.go:141] libmachine: (multinode-714725-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:24:80", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:17:31 +0000 UTC Type:0 Mac:52:54:00:b9:24:80 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-714725-m02 Clientid:01:52:54:00:b9:24:80}
	I1209 11:19:15.556122  644568 main.go:141] libmachine: (multinode-714725-m02) DBG | domain multinode-714725-m02 has defined IP address 192.168.39.21 and MAC address 52:54:00:b9:24:80 in network mk-multinode-714725
	I1209 11:19:15.556258  644568 host.go:66] Checking if "multinode-714725-m02" exists ...
	I1209 11:19:15.556547  644568 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:19:15.556583  644568 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:19:15.573395  644568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34085
	I1209 11:19:15.573926  644568 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:19:15.574560  644568 main.go:141] libmachine: Using API Version  1
	I1209 11:19:15.574591  644568 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:19:15.574955  644568 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:19:15.575167  644568 main.go:141] libmachine: (multinode-714725-m02) Calling .DriverName
	I1209 11:19:15.575359  644568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 11:19:15.575388  644568 main.go:141] libmachine: (multinode-714725-m02) Calling .GetSSHHostname
	I1209 11:19:15.578622  644568 main.go:141] libmachine: (multinode-714725-m02) DBG | domain multinode-714725-m02 has defined MAC address 52:54:00:b9:24:80 in network mk-multinode-714725
	I1209 11:19:15.579084  644568 main.go:141] libmachine: (multinode-714725-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:24:80", ip: ""} in network mk-multinode-714725: {Iface:virbr1 ExpiryTime:2024-12-09 12:17:31 +0000 UTC Type:0 Mac:52:54:00:b9:24:80 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-714725-m02 Clientid:01:52:54:00:b9:24:80}
	I1209 11:19:15.579120  644568 main.go:141] libmachine: (multinode-714725-m02) DBG | domain multinode-714725-m02 has defined IP address 192.168.39.21 and MAC address 52:54:00:b9:24:80 in network mk-multinode-714725
	I1209 11:19:15.579243  644568 main.go:141] libmachine: (multinode-714725-m02) Calling .GetSSHPort
	I1209 11:19:15.579443  644568 main.go:141] libmachine: (multinode-714725-m02) Calling .GetSSHKeyPath
	I1209 11:19:15.579609  644568 main.go:141] libmachine: (multinode-714725-m02) Calling .GetSSHUsername
	I1209 11:19:15.579750  644568 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20068-609844/.minikube/machines/multinode-714725-m02/id_rsa Username:docker}
	I1209 11:19:15.660837  644568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 11:19:15.673513  644568 status.go:176] multinode-714725-m02 status: &{Name:multinode-714725-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1209 11:19:15.673555  644568 status.go:174] checking status of multinode-714725-m03 ...
	I1209 11:19:15.673855  644568 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 11:19:15.673900  644568 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 11:19:15.692081  644568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45363
	I1209 11:19:15.692624  644568 main.go:141] libmachine: () Calling .GetVersion
	I1209 11:19:15.693183  644568 main.go:141] libmachine: Using API Version  1
	I1209 11:19:15.693212  644568 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 11:19:15.693543  644568 main.go:141] libmachine: () Calling .GetMachineName
	I1209 11:19:15.693718  644568 main.go:141] libmachine: (multinode-714725-m03) Calling .GetState
	I1209 11:19:15.695511  644568 status.go:371] multinode-714725-m03 host status = "Stopped" (err=<nil>)
	I1209 11:19:15.695524  644568 status.go:384] host is not running, skipping remaining checks
	I1209 11:19:15.695530  644568 status.go:176] multinode-714725-m03 status: &{Name:multinode-714725-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 node start m03 -v=7 --alsologtostderr
E1209 11:19:36.378068  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-714725 node start m03 -v=7 --alsologtostderr: (38.188942467s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.85s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-714725 node delete m03: (1.795934627s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.34s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (185.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-714725 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1209 11:28:22.652724  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-714725 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m4.490216676s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-714725 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (185.02s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-714725
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-714725-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-714725-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (67.406011ms)

                                                
                                                
-- stdout --
	* [multinode-714725-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20068
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-714725-m02' is duplicated with machine name 'multinode-714725-m02' in profile 'multinode-714725'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-714725-m03 --driver=kvm2  --container-runtime=crio
E1209 11:31:33.303582  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-714725-m03 --driver=kvm2  --container-runtime=crio: (43.221039178s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-714725
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-714725: exit status 80 (208.006621ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-714725 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-714725-m03 already exists in multinode-714725-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-714725-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.36s)

                                                
                                    
TestScheduledStopUnix (117.59s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-825956 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-825956 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.900062295s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-825956 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-825956 -n scheduled-stop-825956
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-825956 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1209 11:35:22.033953  617017 retry.go:31] will retry after 89.111µs: open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/scheduled-stop-825956/pid: no such file or directory
I1209 11:35:22.035107  617017 retry.go:31] will retry after 217.313µs: open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/scheduled-stop-825956/pid: no such file or directory
I1209 11:35:22.036246  617017 retry.go:31] will retry after 333.896µs: open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/scheduled-stop-825956/pid: no such file or directory
I1209 11:35:22.037423  617017 retry.go:31] will retry after 433.976µs: open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/scheduled-stop-825956/pid: no such file or directory
I1209 11:35:22.038545  617017 retry.go:31] will retry after 323.81µs: open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/scheduled-stop-825956/pid: no such file or directory
I1209 11:35:22.039678  617017 retry.go:31] will retry after 895.706µs: open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/scheduled-stop-825956/pid: no such file or directory
I1209 11:35:22.040789  617017 retry.go:31] will retry after 990.232µs: open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/scheduled-stop-825956/pid: no such file or directory
I1209 11:35:22.041905  617017 retry.go:31] will retry after 1.312663ms: open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/scheduled-stop-825956/pid: no such file or directory
I1209 11:35:22.044113  617017 retry.go:31] will retry after 2.1247ms: open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/scheduled-stop-825956/pid: no such file or directory
I1209 11:35:22.047313  617017 retry.go:31] will retry after 1.947762ms: open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/scheduled-stop-825956/pid: no such file or directory
I1209 11:35:22.049535  617017 retry.go:31] will retry after 7.477364ms: open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/scheduled-stop-825956/pid: no such file or directory
I1209 11:35:22.057748  617017 retry.go:31] will retry after 9.516067ms: open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/scheduled-stop-825956/pid: no such file or directory
I1209 11:35:22.068038  617017 retry.go:31] will retry after 19.405267ms: open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/scheduled-stop-825956/pid: no such file or directory
I1209 11:35:22.088338  617017 retry.go:31] will retry after 24.709747ms: open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/scheduled-stop-825956/pid: no such file or directory
I1209 11:35:22.113602  617017 retry.go:31] will retry after 43.738267ms: open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/scheduled-stop-825956/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-825956 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-825956 -n scheduled-stop-825956
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-825956
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-825956 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1209 11:36:16.382006  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-825956
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-825956: exit status 7 (74.568003ms)

                                                
                                                
-- stdout --
	scheduled-stop-825956
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-825956 -n scheduled-stop-825956
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-825956 -n scheduled-stop-825956: exit status 7 (66.912307ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-825956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-825956
E1209 11:36:33.303267  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestScheduledStopUnix (117.59s)
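
Note: the scheduled-stop flow above boils down to arming a timer, inspecting it, cancelling it, re-arming it with a short delay, and then confirming the host actually stopped (a status exit code of 7 means "stopped", which the test treats as acceptable). A minimal manual replay using the same profile name:

	$ out/minikube-linux-amd64 stop -p scheduled-stop-825956 --schedule 5m
	$ out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-825956
	$ out/minikube-linux-amd64 stop -p scheduled-stop-825956 --cancel-scheduled
	$ out/minikube-linux-amd64 stop -p scheduled-stop-825956 --schedule 15s
	$ out/minikube-linux-amd64 status -p scheduled-stop-825956
	  # exit status 7 once host/kubelet/apiserver report Stopped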

                                                
                                    
x
+
TestRunningBinaryUpgrade (139.26s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4015962939 start -p running-upgrade-119214 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4015962939 start -p running-upgrade-119214 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m5.993611689s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-119214 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-119214 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m9.75157735s)
helpers_test.go:175: Cleaning up "running-upgrade-119214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-119214
--- PASS: TestRunningBinaryUpgrade (139.26s)
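
Note: this upgrade test starts a cluster with a pinned older release binary and then re-runs `start` on the same profile with the freshly built binary while the cluster is still running. The temp path below is the downloaded v1.26.0 copy from this run and is purely illustrative:

	$ /tmp/minikube-v1.26.0.4015962939 start -p running-upgrade-119214 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ out/minikube-linux-amd64 start -p running-upgrade-119214 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
	$ out/minikube-linux-amd64 delete -p running-upgrade-119214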

                                                
                                    
x
+
TestPause/serial/Start (59.4s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-529265 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-529265 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (59.401072098s)
--- PASS: TestPause/serial/Start (59.40s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-597739 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-597739 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (86.331918ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-597739] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20068
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
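
Note: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive, so this start is expected to fail fast with exit status 14 (MK_USAGE). The workaround minikube itself suggests is to clear the global config value and start again without the version flag:

	$ out/minikube-linux-amd64 start -p NoKubernetes-597739 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
	  # exit status 14, MK_USAGE
	$ out/minikube-linux-amd64 config unset kubernetes-version
	$ out/minikube-linux-amd64 start -p NoKubernetes-597739 --no-kubernetes --driver=kvm2 --container-runtime=crio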

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (90.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-597739 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-597739 --driver=kvm2  --container-runtime=crio: (1m30.44141834s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-597739 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (90.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-763643 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-763643 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (105.55161ms)

                                                
                                                
-- stdout --
	* [false-763643] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20068
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 11:36:36.590393  652222 out.go:345] Setting OutFile to fd 1 ...
	I1209 11:36:36.590545  652222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:36:36.590556  652222 out.go:358] Setting ErrFile to fd 2...
	I1209 11:36:36.590560  652222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 11:36:36.590735  652222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-609844/.minikube/bin
	I1209 11:36:36.591368  652222 out.go:352] Setting JSON to false
	I1209 11:36:36.592382  652222 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":15541,"bootTime":1733728656,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 11:36:36.592497  652222 start.go:139] virtualization: kvm guest
	I1209 11:36:36.594725  652222 out.go:177] * [false-763643] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 11:36:36.595989  652222 notify.go:220] Checking for updates...
	I1209 11:36:36.596006  652222 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 11:36:36.597262  652222 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 11:36:36.598508  652222 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20068-609844/kubeconfig
	I1209 11:36:36.599776  652222 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-609844/.minikube
	I1209 11:36:36.600980  652222 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 11:36:36.602335  652222 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 11:36:36.604239  652222 config.go:182] Loaded profile config "NoKubernetes-597739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:36:36.604396  652222 config.go:182] Loaded profile config "offline-crio-482227": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:36:36.604517  652222 config.go:182] Loaded profile config "pause-529265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 11:36:36.604665  652222 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 11:36:36.641550  652222 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 11:36:36.642846  652222 start.go:297] selected driver: kvm2
	I1209 11:36:36.642903  652222 start.go:901] validating driver "kvm2" against <nil>
	I1209 11:36:36.642933  652222 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 11:36:36.644781  652222 out.go:201] 
	W1209 11:36:36.645964  652222 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1209 11:36:36.647188  652222 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-763643 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-763643

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-763643

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-763643

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-763643

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-763643

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-763643

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-763643

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-763643

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-763643

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-763643

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-763643

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-763643" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-763643" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-763643

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-763643"

                                                
                                                
----------------------- debugLogs end: false-763643 [took: 2.884406072s] --------------------------------
helpers_test.go:175: Cleaning up "false-763643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-763643
--- PASS: TestNetworkPlugins/group/false (3.15s)
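
Note: this group only verifies that `--cni=false` is rejected when the container runtime is crio, which requires a CNI; no cluster is ever created, so the long debugLogs dump above is expected to report a missing context and missing profile throughout. The check itself reduces to:

	$ out/minikube-linux-amd64 start -p false-763643 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio
	  # exit status 14, MK_USAGE: the "crio" container runtime requires CNI
	$ out/minikube-linux-amd64 delete -p false-763643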

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (49.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-597739 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1209 11:38:22.652419  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-597739 --no-kubernetes --driver=kvm2  --container-runtime=crio: (48.106928743s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-597739 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-597739 status -o json: exit status 2 (235.64957ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-597739","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-597739
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-597739: (1.037590618s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (49.38s)
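
Note: re-running `start` on an existing profile with `--no-kubernetes` keeps the VM but stops the Kubernetes components, so a follow-up `status -o json` exits 2 and reports the kubelet and apiserver as Stopped, which is exactly what this test asserts:

	$ out/minikube-linux-amd64 start -p NoKubernetes-597739 --no-kubernetes --driver=kvm2 --container-runtime=crio
	$ out/minikube-linux-amd64 -p NoKubernetes-597739 status -o json
	  # {"Host":"Running","Kubelet":"Stopped","APIServer":"Stopped", ...} with exit status 2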

                                                
                                    
x
+
TestNoKubernetes/serial/Start (48.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-597739 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-597739 --no-kubernetes --driver=kvm2  --container-runtime=crio: (48.584307517s)
--- PASS: TestNoKubernetes/serial/Start (48.58s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.30s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (182.38s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2591953703 start -p stopped-upgrade-676904 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2591953703 start -p stopped-upgrade-676904 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m40.895254233s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2591953703 -p stopped-upgrade-676904 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2591953703 -p stopped-upgrade-676904 stop: (12.154395722s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-676904 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1209 11:41:33.303550  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-676904 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m9.332572017s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (182.38s)
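
Note: same upgrade path as TestRunningBinaryUpgrade, except the cluster created by the old binary is stopped before the new binary takes it over. Replayed with the temp binary path from this run:

	$ /tmp/minikube-v1.26.0.2591953703 start -p stopped-upgrade-676904 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ /tmp/minikube-v1.26.0.2591953703 -p stopped-upgrade-676904 stop
	$ out/minikube-linux-amd64 start -p stopped-upgrade-676904 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio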

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-597739 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-597739 "sudo systemctl is-active --quiet service kubelet": exit status 1 (208.123715ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
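
Note: the "not running" check is a plain systemd probe over ssh; systemctl exits non-zero (status 3, surfaced by the ssh wrapper as exit status 1), and that non-zero exit is the passing result the test looks for:

	$ out/minikube-linux-amd64 ssh -p NoKubernetes-597739 "sudo systemctl is-active --quiet service kubelet"
	  # non-zero exit means kubelet is not active, which is what the test asserts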

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.04s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-597739
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-597739: (2.377454214s)
--- PASS: TestNoKubernetes/serial/Stop (2.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (59.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-597739 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-597739 --driver=kvm2  --container-runtime=crio: (59.281788944s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (59.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-597739 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-597739 "sudo systemctl is-active --quiet service kubelet": exit status 1 (219.237655ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-676904
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-676904: (1.052231526s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (50.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-005123 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-005123 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (50.66653652s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (50.67s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (93.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-820741 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1209 11:43:05.728209  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:43:22.652656  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-820741 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m33.342066136s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (93.34s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-005123 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9bcf74e3-b510-402f-b829-4c7df5f6b8a9] Pending
helpers_test.go:344: "busybox" [9bcf74e3-b510-402f-b829-4c7df5f6b8a9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9bcf74e3-b510-402f-b829-4c7df5f6b8a9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004178407s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-005123 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.32s)
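
Note: DeployApp applies the repo's busybox fixture, waits for pods labeled integration-test=busybox to become healthy, then checks the open-file limit inside the container. A rough manual equivalent is sketched below; the `kubectl wait` step is an approximation, since the test polls pod status through its own helpers rather than kubectl:

	$ kubectl --context embed-certs-005123 create -f testdata/busybox.yaml
	$ kubectl --context embed-certs-005123 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
	$ kubectl --context embed-certs-005123 exec busybox -- /bin/sh -c "ulimit -n"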

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-005123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-005123 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)
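
Note: EnableAddonWhileActive turns on the metrics-server addon with image and registry overrides while the cluster is up, then confirms the Deployment exists in kube-system:

	$ out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-005123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	$ kubectl --context embed-certs-005123 describe deploy/metrics-server -n kube-system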

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-820741 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4e76af62-1ba8-410c-ace3-c92e48840825] Pending
helpers_test.go:344: "busybox" [4e76af62-1ba8-410c-ace3-c92e48840825] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4e76af62-1ba8-410c-ace3-c92e48840825] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003882116s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-820741 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-820741 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-820741 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-482476 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-482476 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (51.255823465s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (685.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-005123 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-005123 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (11m25.559535429s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-005123 -n embed-certs-005123
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (685.82s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-482476 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7e957ef4-a510-42eb-b025-f64d260656c5] Pending
helpers_test.go:344: "busybox" [7e957ef4-a510-42eb-b025-f64d260656c5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7e957ef4-a510-42eb-b025-f64d260656c5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004403182s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-482476 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (555.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-820741 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-820741 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (9m14.765821477s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-820741 -n no-preload-820741
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (555.03s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-482476 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-482476 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-014592 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-014592 --alsologtostderr -v=3: (3.29271896s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-014592 -n old-k8s-version-014592
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-014592 -n old-k8s-version-014592: exit status 7 (67.950056ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-014592 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
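
Note: EnableAddonAfterStop first confirms the host is down (`status` exits 7, which the test treats as an acceptable "stopped" state) and then enables the dashboard addon against the stopped profile:

	$ out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-014592 -n old-k8s-version-014592
	  # prints "Stopped" and exits 7
	$ out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-014592 --images=MetricsScraper=registry.k8s.io/echoserver:1.4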

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (458.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-482476 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1209 11:51:33.303347  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:52:56.384320  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:53:22.652421  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
E1209 11:56:33.303569  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/addons-156041/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-482476 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (7m38.332743326s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (458.60s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (46.42s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-932878 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-932878 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (46.415332455s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.42s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (61.85s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-763643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-763643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m1.850196858s)
--- PASS: TestNetworkPlugins/group/auto/Start (61.85s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.39s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-932878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-932878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.391404464s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.4s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-932878 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-932878 --alsologtostderr -v=3: (10.398341443s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.40s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (92.23s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-763643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-763643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m32.225537217s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (92.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-932878 -n newest-cni-932878
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-932878 -n newest-cni-932878: exit status 7 (78.783384ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-932878 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (77.93s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-932878 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1209 12:13:22.652817  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-932878 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m17.587605025s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-932878 -n newest-cni-932878
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (77.93s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-763643 "pgrep -a kubelet"
I1209 12:14:07.233451  617017 config.go:182] Loaded profile config "auto-763643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.36s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-763643 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ng5ts" [d6fdda7e-018d-424a-9213-cafeb4f06faa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ng5ts" [d6fdda7e-018d-424a-9213-cafeb4f06faa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005678268s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.36s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-763643 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-763643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-763643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-932878 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (83.34s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-763643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-763643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m23.341068242s)
--- PASS: TestNetworkPlugins/group/calico/Start (83.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.96s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-932878 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-932878 -n newest-cni-932878
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-932878 -n newest-cni-932878: exit status 2 (397.492849ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-932878 -n newest-cni-932878
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-932878 -n newest-cni-932878: exit status 2 (447.695458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-932878 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-932878 --alsologtostderr -v=1: (1.142728535s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-932878 -n newest-cni-932878
E1209 12:14:39.061826  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:14:39.068266  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:14:39.079678  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:14:39.101149  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:14:39.143392  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:14:39.224932  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-932878 -n newest-cni-932878
E1209 12:14:39.387221  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:14:39.708586  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.96s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (99.52s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-763643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-763643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m39.517015854s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (99.52s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-c6tfs" [0dce0c2c-3833-41cb-a5bd-61e14dd05c45] Running
E1209 12:14:41.631370  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:14:44.193101  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005346625s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-763643 "pgrep -a kubelet"
I1209 12:14:47.425077  617017 config.go:182] Loaded profile config "kindnet-763643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-763643 replace --force -f testdata/netcat-deployment.yaml
I1209 12:14:47.636874  617017 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-29bd9" [1d448f12-ab6f-40fb-8ec7-bd62f612da40] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1209 12:14:49.314637  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-29bd9" [1d448f12-ab6f-40fb-8ec7-bd62f612da40] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.00533598s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-763643 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-763643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-763643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (83.02s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-763643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1209 12:15:16.238673  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:15:16.245197  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:15:16.256646  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:15:16.278144  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:15:16.319678  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:15:16.401952  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:15:16.563981  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:15:16.885865  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:15:17.528297  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:15:18.809806  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:15:20.038410  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:15:21.372304  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:15:26.494004  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:15:36.735801  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-763643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m23.023351459s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-482476 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.29s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-482476 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476: exit status 2 (348.199142ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476: exit status 2 (403.264187ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-482476 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-482476 -n default-k8s-diff-port-482476
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (84.29s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-763643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1209 12:15:57.217418  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-763643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m24.286649047s)
--- PASS: TestNetworkPlugins/group/flannel/Start (84.29s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vnx8f" [4091f8ab-cbe8-4be5-9097-1ac59f1ee5b1] Running
E1209 12:16:01.000170  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.013705442s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-763643 "pgrep -a kubelet"
I1209 12:16:05.067596  617017 config.go:182] Loaded profile config "calico-763643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.4s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-763643 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qksrg" [bec71877-a742-4830-bbcf-31697724852e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qksrg" [bec71877-a742-4830-bbcf-31697724852e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.005775554s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.40s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-763643 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-763643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-763643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-763643 "pgrep -a kubelet"
I1209 12:16:20.832422  617017 config.go:182] Loaded profile config "custom-flannel-763643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-763643 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-n566d" [8aea63d5-e953-442a-befc-945b940fed34] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1209 12:16:25.731492  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/functional-032350/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-n566d" [8aea63d5-e953-442a-befc-945b940fed34] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005021782s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-763643 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-763643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-763643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (61.97s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-763643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1209 12:16:38.178811  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/old-k8s-version-014592/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-763643 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m1.973104465s)
--- PASS: TestNetworkPlugins/group/bridge/Start (61.97s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-763643 "pgrep -a kubelet"
I1209 12:16:38.828364  617017 config.go:182] Loaded profile config "enable-default-cni-763643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.37s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-763643 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6frf8" [0afb757a-7ea6-484b-9db4-f5802ad0e80f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6frf8" [0afb757a-7ea6-484b-9db4-f5802ad0e80f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005767277s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-763643 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-763643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-763643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6jhlt" [f9eac306-7c33-4940-a8bb-01246e79827e] Running
E1209 12:17:17.041948  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:17:17.048357  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:17:17.059786  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:17:17.081728  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:17:17.123162  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:17:17.204654  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:17:17.366328  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:17:17.688170  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:17:18.329944  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:17:19.612020  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004928757s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-763643 "pgrep -a kubelet"
I1209 12:17:21.274519  617017 config.go:182] Loaded profile config "flannel-763643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.21s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-763643 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t2gzd" [c54ad429-3454-4f26-9bf3-3c6bfd7db6a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1209 12:17:22.174213  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.crt: no such file or directory" logger="UnhandledError"
E1209 12:17:22.922077  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/no-preload-820741/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-t2gzd" [c54ad429-3454-4f26-9bf3-3c6bfd7db6a8] Running
E1209 12:17:27.296566  617017 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-609844/.minikube/profiles/default-k8s-diff-port-482476/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003894234s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-763643 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-763643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-763643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-763643 "pgrep -a kubelet"
I1209 12:17:39.598478  617017 config.go:182] Loaded profile config "bridge-763643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.24s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-763643 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-49hvj" [c6addaaf-cd2f-452f-8beb-2c371fa5e6e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-49hvj" [c6addaaf-cd2f-452f-8beb-2c371fa5e6e7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00466281s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-763643 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-763643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-763643 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
Test skip (39/316)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.2/cached-images 0
15 TestDownloadOnly/v1.31.2/binaries 0
16 TestDownloadOnly/v1.31.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.32
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
259 TestStartStop/group/disable-driver-mounts 0.16
265 TestNetworkPlugins/group/kubenet 3.2
274 TestNetworkPlugins/group/cilium 3.31

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.32s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-156041 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
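Editor's note: the docker-only skips in this report (TestDockerFlags, DockerEnv, PodmanEnv, TestSkaffold) all follow the same shape: check the runtime under test and skip when it is not docker. A minimal sketch of that guard, where ContainerRuntime is a placeholder of my own and not minikube's actual API:

package docker_sketch

import "testing"

// ContainerRuntime stands in for however the suite reports the runtime under
// test; in this report it would return "crio".
func ContainerRuntime() string { return "crio" }

// skipIfNotDocker sketches the guard implied by docker_test.go:41.
func skipIfNotDocker(t *testing.T) {
	t.Helper()
	if rt := ContainerRuntime(); rt != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", rt)
	}
}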

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)
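Editor's note: both HyperKit skips are plain OS guards, since that driver only exists on macOS. A sketch of the check the "Skip if not darwin." message suggests, using standard library calls only:

package hyperkit_sketch

import (
	"runtime"
	"testing"
)

// skipIfNotDarwin sketches the guard behind driver_install_or_update_test.go:105 and :169.
func skipIfNotDarwin(t *testing.T) {
	t.Helper()
	if runtime.GOOS != "darwin" {
		t.Skip("Skip if not darwin.")
	}
}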

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
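Editor's note: all eight TunnelCmd skips below trace back to the same precondition at functional_test_tunnel_test.go:90: adjusting routes needs root, so the suite bails out when sudo would prompt for a password. A hedged sketch of that kind of probe; the exact command and helper name are assumptions, since the log only shows the skip message:

package tunnel_sketch

import (
	"os/exec"
	"testing"
)

// skipIfRouteNeedsPassword sketches the tunnel precondition: with "sudo -n",
// sudo fails instead of prompting, so a non-zero exit means routes cannot be
// changed non-interactively and the tunnel tests should be skipped.
func skipIfRouteNeedsPassword(t *testing.T) {
	t.Helper()
	if err := exec.Command("sudo", "-n", "route", "-n").Run(); err != nil {
		t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
	}
}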

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-905993" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-905993
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-763643 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-763643

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-763643

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-763643

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-763643

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-763643

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-763643

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-763643

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-763643

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-763643

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-763643

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-763643

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-763643" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-763643" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-763643

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-763643"

                                                
                                                
----------------------- debugLogs end: kubenet-763643 [took: 3.041099893s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-763643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-763643
--- SKIP: TestNetworkPlugins/group/kubenet (3.20s)
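Editor's note: although the kubenet group is skipped outright (crio requires a CNI), the harness still dumps the debugLogs block above against a profile that was never started, which is why every probe reports "context was not found", and then deletes the profile. A minimal sketch of that cleanup step, mirroring the command shown at helpers_test.go:178:

package cleanup_sketch

import (
	"fmt"
	"os/exec"
)

// deleteProfile mirrors the cleanup the harness ran for the unused
// kubenet-763643 profile: out/minikube-linux-amd64 delete -p <profile>.
func deleteProfile(profile string) error {
	out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
	if err != nil {
		return fmt.Errorf("delete -p %s: %v\n%s", profile, err, out)
	}
	return nil
}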

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-763643 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-763643

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-763643

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-763643

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-763643

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-763643

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-763643

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-763643

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-763643

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-763643

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-763643

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-763643

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-763643" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-763643

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-763643

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-763643

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-763643

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-763643" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-763643" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-763643

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-763643" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-763643"

                                                
                                                
----------------------- debugLogs end: cilium-763643 [took: 3.161141113s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-763643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-763643
--- SKIP: TestNetworkPlugins/group/cilium (3.31s)

                                                
                                    